Update README.md
README.md
CHANGED
@@ -11,7 +11,7 @@ pipeline_tag: text-generation
 
 <img alt="OLMo Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmo2/olmo.png" width="242px">
 
-OLMo 2 32B
+OLMo 2 32B DPO March 2025 is a post-trained variant of the [OLMo-2 32B March 2025](https://huggingface.co/allenai/OLMo-2-0325-32B/) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-0325-32b-preference-mix).
 Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
 Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
 
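For reference, the post-trained checkpoint described in the added line can be loaded with standard transformers APIs. A minimal sketch, assuming the repo id `allenai/OLMo-2-0325-32B-DPO` (inferred from the linked base model's naming, not confirmed by this diff):

```python
# Minimal usage sketch for the DPO checkpoint described above.
# NOTE: the repo id below is an assumption inferred from the naming of the
# linked base model (allenai/OLMo-2-0325-32B); confirm it on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-0325-32B-DPO"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native dtype
    device_map="auto",   # requires `accelerate`; shards across available GPUs
)

# Chat-style generation via the tokenizer's chat template.
messages = [{"role": "user", "content": "Briefly, what is DPO post-training?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```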
@@ -119,7 +119,6 @@ See the Falcon 180B model card for an example of this.
 - **PopQA**: Popular Question Answering
 - **TruthQA**: Truthfulness in question answering
 
-*Note: Replace the "Your Model" row with your model's evaluation results. You can add additional information about your model's performance compared to others.*
 
 ## License and use
 