lbourdois committed
Commit 426a928 · verified · 1 Parent(s): 4390330

Improve language tag

Hi! As the model is multilingual, this PR adds languages other than English to the `language` tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.

Files changed (1): README.md (+41 -29)
README.md CHANGED
@@ -1,29 +1,41 @@
- ---
- base_model: Qwen/Qwen2.5-3B
- language:
- - en
- license: other
- license_name: qwen-research
- license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
- pipeline_tag: text-generation
- tags:
- - chat
- - mlx
- ---
-
- # mlx-community/Qwen2.5-3B-Instruct-8bit
-
- The Model [mlx-community/Qwen2.5-3B-Instruct-8bit](https://huggingface.co/mlx-community/Qwen2.5-3B-Instruct-8bit) was converted to MLX format from [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using mlx-lm version **0.18.1**.
-
- ## Use with mlx
-
- ```bash
- pip install mlx-lm
- ```
-
- ```python
- from mlx_lm import load, generate
-
- model, tokenizer = load("mlx-community/Qwen2.5-3B-Instruct-8bit")
- response = generate(model, tokenizer, prompt="hello", verbose=True)
- ```
+ ---
+ base_model: Qwen/Qwen2.5-3B
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ license: other
+ license_name: qwen-research
+ license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
+ pipeline_tag: text-generation
+ tags:
+ - chat
+ - mlx
+ ---
+
+ # mlx-community/Qwen2.5-3B-Instruct-8bit
+
+ The Model [mlx-community/Qwen2.5-3B-Instruct-8bit](https://huggingface.co/mlx-community/Qwen2.5-3B-Instruct-8bit) was converted to MLX format from [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using mlx-lm version **0.18.1**.
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("mlx-community/Qwen2.5-3B-Instruct-8bit")
+ response = generate(model, tokenizer, prompt="hello", verbose=True)
+ ```
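
Since this is an instruct model, chat-style prompts (in any of the languages listed in the tag) are normally passed through the tokenizer's chat template before calling `generate`. A minimal sketch, not part of this diff, assuming the tokenizer returned by mlx-lm's `load` exposes the usual `apply_chat_template` method:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-3B-Instruct-8bit")

# Example user turn in French; the model is multilingual, so any supported
# language should work here.
messages = [{"role": "user", "content": "Bonjour, peux-tu te présenter ?"}]

# Render the conversation with the Qwen chat template and append the
# assistant generation prompt.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```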