lewtun (HF staff) committed
Commit a093c19 · verified · 1 Parent(s): 64ce6a7

Update README.md

Files changed (1):
  1. README.md (+6 -1)

README.md CHANGED
@@ -12,7 +12,10 @@ library_name: transformers
 
 # Model Card for OlympicCoder-7B
 
-OlympicCoder-7B is a code model that achieves strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.
+OlympicCoder-7B is a code model that achieves strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.
+
+* Repository: https://github.com/huggingface/open-r1
+* Blog post: https://huggingface.co/blog/open-r1/update-3
 
 ## Model description
 
@@ -52,6 +55,8 @@ print(outputs[0]["generated_text"])
 #<think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. ...
 ```
 
+> [!WARNING]
+> To ensure that the model consistently outputs a long chain-of-thought, we have edited the chat template to prefill the first assistant turn with a `<think>` token. As a result, the outputs from this model will not show the opening `<think>` token if you use the model's `generate()` method. To apply reinforcement learning with a format reward, either prepend the `<think>` token to the model's completions or amend the chat template to remove the prefill.
 
 ## Training procedure
 ### Training hyper-parameters
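
The added warning suggests prepending the `<think>` token to completions before scoring them with a format reward. A minimal sketch of what that could look like; the `restore_think_prefix` and `format_reward` helpers and the regex are illustrative assumptions, not part of the open-r1 repository:

```python
import re

THINK_OPEN = "<think>"

def restore_think_prefix(completion: str) -> str:
    """Re-attach the opening <think> token that the chat template prefills.

    Because the template already emits <think> before generation begins,
    completions returned by generate() start mid-reasoning; prepending the
    token lets format checks see the full <think>...</think> structure.
    """
    if not completion.startswith(THINK_OPEN):
        completion = THINK_OPEN + completion
    return completion

# Hypothetical format reward: 1.0 if the completion wraps its reasoning in a
# single <think>...</think> block followed by a final answer, else 0.0.
FORMAT_RE = re.compile(r"^<think>.*?</think>.+$", re.DOTALL)

def format_reward(completion: str) -> float:
    return 1.0 if FORMAT_RE.match(restore_think_prefix(completion)) else 0.0

# Raw generate() output, missing the opening tag that the template prefilled:
raw = "Okay, I need to write a Python program...</think>Here is the code."
print(format_reward(raw))  # 1.0 once the prefix is restored
```

The alternative route named in the warning, removing the prefill from the model's chat template so completions carry their own opening `<think>` token, avoids this post-processing entirely at the cost of the model sometimes skipping the long chain-of-thought.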