Update README.md
README.md
OlympicCoder-32B is a code model that achieves very strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.

* Repository: https://github.com/huggingface/open-r1
* Blog post: https://huggingface.co/blog/open-r1/update-3

## Model description

- **Model type:** A 32B parameter model fine-tuned on a decontaminated version of the codeforces dataset.
#<think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. ...
```
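For reference, the kind of iterative program the sample completion above reasons toward might look like the sketch below (function name and indexing convention are illustrative, not taken from the model's output):

```python
# Iterative Fibonacci, matching the sequence the completion walks through:
# 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The 10th number of the sequence starting 0, 1 (0-indexed position 9) is 34.
print(fibonacci(9))
```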
> [!IMPORTANT]
> To ensure that the model consistently outputs a long chain-of-thought, we have edited the chat template to prefill the first assistant turn with a `<think>` token. As a result, the outputs from this model will not show the opening `<think>` token if you use the model's `generate()` method. To apply reinforcement learning with a format reward, either prepend the `<think>` token to the model's completions or amend the chat template to remove the prefill. Check out our [blog post](https://huggingface.co/blog/open-r1/update-3#lesson-4-prefill-with-think-to-consistently-enable-long-cot) for more details.
## Training procedure

### Training hyperparameters