vwxyzjn committed · Commit 6568974 · verified · 1 Parent(s): 99e436d

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -20,7 +20,7 @@ Upon the initial release of OLMo-2 models, we realized the post-trained models d
 
 ## Release Documentation
 
-OLMo 2 7B Instruct November 2024 is a post-trained variant of the [OLMo-2 7B November 2024](https://huggingface.co/allenai/OLMo2-7B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix), and finally RLVR training using [this data](https://huggingface.co/datasets/allenai/RLVR-GSM).
+OLMo 2 13B Instruct November 2024 is a post-trained variant of the [OLMo-2 13B November 2024](https://huggingface.co/allenai/OLMo2-13B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix), and finally RLVR training using [this data](https://huggingface.co/datasets/allenai/RLVR-GSM).
 Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
 Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
 
@@ -44,7 +44,7 @@ The core models released in this batch include the following:
 - **Model type:** A model trained on a mix of publicly available, synthetic and human-created datasets.
 - **Language(s) (NLP):** Primarily English
 - **License:** Apache 2.0
-- **Finetuned from model:** allenai/OLMo-2-7B-1124-DPO
+- **Finetuned from model:** allenai/OLMo-2-13B-1124-DPO
 
 ### Model Sources
 
@@ -71,7 +71,7 @@ To load the model with HuggingFace, use the following snippet:
 ```
 from transformers import AutoModelForCausalLM
 
-olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B-Instruct")
+olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-13B-Instruct")
 ```
 
 ### Chat template
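For reference, here is a minimal end-to-end sketch of how the updated snippet might be used, assuming the standard `transformers` chat-template API; the prompt text and generation settings are illustrative and are not part of this commit:

```
# Minimal usage sketch; the prompt and generation settings below are
# illustrative assumptions, not content from the commit above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-13B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
olmo_model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a prompt using the model's own chat template.
messages = [{"role": "user", "content": "What is OLMo 2?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a reply and decode only the newly generated tokens.
outputs = olmo_model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```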