Commit 9b72dda (verified) by Mungert · Parent: 264003d

Upload README.md with huggingface_hub

Files changed (1): README.md added (+236 lines)

---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B
tags:
- chat
library_name: transformers
---

# <span style="color: #7FFF7F;">Qwen2.5-3B-Instruct GGUF Models</span>

## **Choosing the Right Model Format**

Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.

### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides a **similar dynamic range** to FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs, or see the quick check below).
- Ideal for **high-performance inference** with a **reduced memory footprint** compared to FP32.

📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.

📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.

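If you are unsure, the sketch below is a minimal way to probe for BF16 support, assuming a local PyTorch install with CUDA; other accelerators expose similar capability queries through their own APIs.

```python
import torch

# Minimal BF16 capability probe (assumes PyTorch built with CUDA; other backends differ).
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    print("GPU reports native BF16 support - the BF16 GGUF is a good fit.")
else:
    print("No BF16 acceleration detected - consider the F16 or quantized files instead.")
```
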
---

### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision**, but a smaller range of representable values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16, but generally sufficient for inference.

📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations (see the loading example below).

📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.

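If you use the `transformers` checkpoint rather than the GGUF files, FP16 can also be requested explicitly instead of relying on `torch_dtype="auto"`. A minimal sketch, assuming a CUDA GPU with FP16 support and `accelerate` installed for `device_map`:

```python
import torch
from transformers import AutoModelForCausalLM

# Explicitly load the weights in FP16 on an FP16-capable GPU.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
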
---

### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, require more memory.

📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy (rough size estimates below).

📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).

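As a rough rule of thumb, the weight footprint is about `parameters x bits-per-weight / 8` bytes. The sketch below applies this to the 3.09B-parameter model; the effective bit-widths are my approximations rather than exact GGUF definitions, and file metadata plus runtime overhead (KV cache, activations) are ignored, so treat the results as ballpark figures.

```python
# Back-of-envelope weight-size estimate: params * bits_per_weight / 8 bytes.
# Bit-widths below are approximate effective values, not exact GGUF format definitions.
params = 3.09e9
approx_bits_per_weight = {"F16/BF16": 16.0, "Q8_0": 8.5, "Q6_K": 6.6, "Q4_K": 4.5}

for fmt, bits in approx_bits_per_weight.items():
    size_gb = params * bits / 8 / 1e9
    print(f"{fmt}: ~{size_gb:.1f} GB of weights")
```
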
---

### **Summary Table: Model Format Selection**

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn’t available |
| **Q4_K** | Low | Very Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium-Low | Low | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | Medium | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |

## **Included Files & Details**

### `Qwen2.5-3B-Instruct-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.

### `Qwen2.5-3B-Instruct-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.

### `Qwen2.5-3B-Instruct-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.

### `Qwen2.5-3B-Instruct-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.

### `Qwen2.5-3B-Instruct-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory (see the example below).

### `Qwen2.5-3B-Instruct-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.

### `Qwen2.5-3B-Instruct-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K**.

### `Qwen2.5-3B-Instruct-q8_0.gguf`
- Fully **Q8_0** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.

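For reference, the quantized files can be run on a CPU with llama.cpp or its Python bindings. The sketch below is a minimal example using the optional `llama-cpp-python` package (an assumption, not a requirement of this repo); the local file path, context size, and thread count are illustrative and should be adjusted for your machine.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quantized GGUF for CPU inference (illustrative local path).
llm = Llama(
    model_path="./Qwen2.5-3B-Instruct-q4_k.gguf",
    n_ctx=4096,    # context window to allocate
    n_threads=8,   # CPU threads to use
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Larger context windows increase KV-cache memory use, so keep `n_ctx` modest on low-memory machines.
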
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Please click Like ❤. Also, I’d really appreciate it if you could test my Network Monitor Assistant at 👉 [Network Monitor Assistant](https://freenetworkmonitor.click/dashboard).

💬 Click the **chat icon** (bottom right of the main and dashboard pages), choose an LLM, and toggle between the LLM types TurboLLM -> FreeLLM -> TestLLM.

### What I'm Testing

I'm experimenting with **function calling** against my network monitoring service, using small open-source models. The question I'm interested in is: "How small can a model go and still function?"

🟡 **TestLLM** – Runs the current testing model using llama.cpp on 6 threads of a CPU VM (it should take about 15 s to load; inference is quite slow and it only processes one user prompt at a time, so I'm still working on scaling). If you're curious, I'd be happy to share how it works!

### The Other Available AI Assistants

🟢 **TurboLLM** – Uses **gpt-4o-mini**. Fast! Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://freenetworkmonitor.click) or [Download](https://freenetworkmonitor.click/download) the Free Network Monitor agent to get more tokens, or alternatively use the FreeLLM.

🔵 **FreeLLM** – Runs **open-source Hugging Face models**. Medium speed (unlimited, subject to Hugging Face API availability).

# Qwen2.5-3B-Instruct

## Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

**This repo contains the instruction-tuned 3B Qwen2.5 model**, which has the following features (see the config check below the list):
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens

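The architecture details above can be cross-checked against the published configuration; a minimal sketch, assuming network access to the Hugging Face Hub (or a locally cached copy of the repo):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

# Cross-check the architecture details listed above against the shipped config.
print(config.num_hidden_layers)     # 36 layers
print(config.num_attention_heads)   # 16 query heads
print(config.num_key_value_heads)   # 2 KV heads (GQA)
print(config.tie_word_embeddings)   # True (tied word embeddings)
```
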
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Requirements

The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```

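To guard against this at runtime, a minimal version check; it assumes the `packaging` package, which `transformers` already depends on:

```python
import transformers
from packaging import version

# Qwen2 model support requires transformers >= 4.37.0 (older versions raise KeyError: 'qwen2').
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; please upgrade."
    )
```
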
## Quickstart

The following code snippet shows how to load the tokenizer and model, and how to generate content using `apply_chat_template`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-3B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).

For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to give us a cite.

```
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```