Update README.md
README.md CHANGED
@@ -15,7 +15,6 @@ tags:
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

-- Pretrained on our **latest large-scale dataset**, encompassing up to **18T tokens**.
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support**: up to 128K tokens of context, with generation of up to 8K tokens.
@@ -76,7 +75,7 @@ generated_ids = [
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

-##
+## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
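
For context, the second hunk touches the tail of the README's quickstart snippet (`generated_ids = [` appears in the hunk header). Below is a minimal sketch of the flow that `batch_decode` line belongs to, using the standard 🤗 Transformers API; the checkpoint name, prompt, and `max_new_tokens` value are illustrative assumptions, not part of this commit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; any Qwen2.5 instruct variant follows the same flow.
model_name = "Qwen/Qwen2.5-7B-Instruct"

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
# Render the chat template into a plain prompt string, then tokenize it.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Drop the prompt tokens from each sequence so only new tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

The per-sample slicing before `batch_decode` is what strips the prompt tokens, so `response` holds only the newly generated text.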