<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Model Card for NuminaMath 7B CoT

NuminaMath is a series of language models that are trained with two stages of supervised fine-tuning to solve math problems using chain of thought (CoT) and tool-integrated reasoning (TIR):

* **Stage 1:** fine-tune the base model on a large, diverse dataset of natural language math problems and solutions, where each solution is templated with CoT to facilitate reasoning.
* **Stage 2:** fine-tune the model from Stage 1 on a synthetic dataset of tool-integrated reasoning, where each math problem is decomposed into a sequence of rationales, Python programs, and their outputs. Here we followed [Microsoft’s ToRA paper](https://arxiv.org/abs/2309.17452) and prompted GPT-4 to produce solutions in the ToRA format with code-execution feedback. Fine-tuning on this data produces a reasoning agent that can solve mathematical problems via a mix of natural language reasoning and use of the Python REPL to compute intermediate results.

NuminaMath 7B CoT is the model from Stage 1 and was fine-tuned on [AI-MO/NuminaMath-CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT), a large-scale dataset of 860k+ math competition problem-solution pairs.

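To make the Stage 2 data format concrete, here is a purely illustrative sketch of how a tool-integrated solution could be assembled from alternating rationales, Python programs, and their outputs. The segment structure and markers below are assumptions for illustration, not the actual NuminaMath preprocessing code:

```python
# Illustrative only: assemble a ToRA-style training text from alternating
# rationale / code / output segments. Segment kinds and markers are assumptions.
segments = [
    ("rationale", "Let s = x + y. We compute s**2 with Python."),
    ("code", "x, y = 3, 4\nprint((x + y) ** 2)"),
    ("output", "49"),
    ("rationale", "The final answer is 49."),
]

parts = []
for kind, body in segments:
    if kind == "code":
        parts.append(f"```python\n{body}\n```")   # model-written program
    elif kind == "output":
        parts.append(f"```output\n{body}\n```")   # execution feedback
    else:
        parts.append(body)                        # natural language rationale
example = "\n".join(parts)
print(example)
```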
## Model description

- **Model type:** A 7B parameter math LLM fine-tuned on a dataset with 860k+ math problem-solution pairs.
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/project-numina/aimo-progress-prize
- **Demo:** https://huggingface.co/spaces/AI-MO/math-olympiad-solver

## Intended uses & limitations

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
import re

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="AI-MO/NuminaMath-7B-TIR", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$ have two distinct integer roots?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

gen_config = {
    "max_new_tokens": 1024,
    "do_sample": False,
    "stop_strings": ["```output"],  # Generate until the Python code block is complete
    "tokenizer": pipe.tokenizer,
}

outputs = pipe(prompt, **gen_config)
text = outputs[0]["generated_text"]
print(text)

# WARNING: This will execute the Python code contained in the generated string.
# We show this for educational purposes only; please refer to our full pipeline
# for a safer way to execute code.
python_code = re.findall(r"```python(.*?)```", text, re.DOTALL)[0]
exec(python_code)
```

The above executes a single step of Python code; for more complex problems, you will want to run the logic for several steps to obtain the final solution.

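Such a multi-step loop can be sketched as follows. This is a minimal sketch, not the project's actual inference pipeline: it assumes a `generate(text)` callable that returns the next completion (e.g. a wrapper around the pipeline above that stops when a code block completes), and it uses bare `exec()`, which is unsafe for untrusted model output:

```python
import contextlib
import io
import re

def solve_with_tool_calls(generate, prompt, max_steps=4):
    """Alternate between model generation and Python execution until the
    model stops emitting code blocks. WARNING: exec() runs model-written
    code; use a sandboxed interpreter in practice."""
    text = prompt
    for _ in range(max_steps):
        completion = generate(text)
        text += completion
        code_blocks = re.findall(r"```python(.*?)```", completion, re.DOTALL)
        if not code_blocks:
            break  # no more tool calls: the model produced its final answer
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            exec(code_blocks[-1], {})
        # Feed the captured stdout back to the model as an ```output block
        text += f"\n```output\n{buffer.getvalue()}```\n"
    return text
```

Here `generate` would wrap the `pipe(...)` call from the snippet above, returning only the newly generated text at each step.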
## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

NuminaMath 7B TIR was created to solve problems in the narrow domain of competition-level mathematics. As a result, the model should not be used for general chat applications. With greedy decoding, we find the model is capable of solving problems at the level of [AMC 12](https://artofproblemsolving.com/wiki/index.php/2023_AMC_12A_Problems), but often struggles to generate a valid solution on harder problems at the AIME and Math Olympiad level. The model also struggles to solve geometry problems, likely due to its limited capacity and lack of other modalities like vision.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4.0

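As a quick sanity check, the total batch sizes above follow from the per-device values; gradient accumulation is not listed, and a value of 1 is an assumption consistent with the totals:

```python
# Illustrative arithmetic only; gradient_accumulation_steps=1 is an assumption.
train_batch_size = 4       # per device
eval_batch_size = 8        # per device
num_devices = 8
gradient_accumulation_steps = 1

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices
print(total_train_batch_size, total_eval_batch_size)  # 32 64
```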
### Framework versions

- Transformers 4.40.1
- Pytorch 2.3.1
- Datasets 2.18.0
- Tokenizers 0.19.1

## Citation

If you find NuminaMath 7B TIR useful in your work, please cite it with:

```bibtex
@misc{numina_math_7b,
  author = {Edward Beeching and Shengyi Costa Huang and Albert Jiang and Jia Li and Benjamin Lipkin and Zihan Qina and Kashif Rasul and Ziju Shen and Roman Soletskyi and Lewis Tunstall},
  title = {NuminaMath 7B TIR},
  year = {2024},
  publisher = {Numina \& Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-7B-TIR}}
}
```

This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base) on the AI-MO/numina-dataset-v1.0-release-candidate-1-preproc dataset.
It achieves the following results on the evaluation set: