Improve Model Card: Add Paper Link, Code Link, and Usage Instructions

#1
by nielsr (HF staff) - opened
Files changed (1)
  1. README.md +28 -85
README.md CHANGED
@@ -1,53 +1,36 @@
  ---
- license: apache-2.0
  language:
  - en
  metrics:
  - accuracy
- base_model: BitStarWalkin/SuperCorrect-7B
- library_name: transformers
  tags:
  - llama-cpp
  - gguf-my-repo
  ---
13
 
14
  # Triangle104/SuperCorrect-7B-Q4_K_S-GGUF
15
- This model was converted to GGUF format from [`BitStarWalkin/SuperCorrect-7B`](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
16
- Refer to the [original model card](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) for more details on the model.
17
-
18
- ---
19
- Model details:
20
- -
21
-
22
-
23
- SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights Ling Yang*, Zhaochen Yu*, Tianjun Zhang, Minkai Xu, Joseph E. Gonzalez,Bin Cui, Shuicheng Yan
24
-
25
- Peking University, Skywork AI, UC Berkeley, Stanford University
26
 
27
- Introduction
28
- -
29
 
30
- This repo provides the official implementation of SuperCorrect a novel two-stage fine-tuning method for improving both reasoning accuracy and self-correction ability for LLMs.
31
 
32
- Notably, our SupperCorrect-7B model significantly surpasses powerful DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on MATH/GSM8K benchmarks, achieving new SOTA performance among all 7B models.
33
- 🚨 Unlike other LLMs, we incorporate LLMs with our pre-defined hierarchical thought template ([Buffer of Thought (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) to conduct more deliberate reasoning than conventional CoT. It should be noted that our evaluation methods relies on pure mathematical reasoning abilities of LLMs, instead of leverage other programming methods such as PoT and ToRA.
34
- Examples
35
 
36
- 🚨 For more concise and clear presentation, we omit some XML tags.
37
- Model details
38
 
39
- You can check our Github repo for more details.
40
- Quick Start
41
- Requirements
42
 
43
- Since our current model is based on Qwen2.5-Math series, transformers>=4.37.0 is needed for Qwen2.5-Math models. The latest version is recommended.
44
 
45
- 🚨 This is a must because `transformers` integrated Qwen2 codes since `4.37.0`.
46
 
47
- Inference
48
- -
49
- 🤗 Hugging Face Transformers
50
 
 
51
  from transformers import AutoModelForCausalLM, AutoTokenizer
52
 
53
  model_name = "BitStarWalkin/SuperCorrect-7B"
@@ -62,9 +45,9 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)

  prompt = "Find the distance between the foci of the ellipse \[9x^2 + \frac{y^2}{9} = 99.\]"
  hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
- # HT
  messages = [
- {"role": "system", "content":hierarchical_prompt },
  {"role": "user", "content": prompt}
  ]

@@ -85,67 +68,27 @@ generated_ids = [

  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
  print(response)

- Performance
- -
- We evaluate our SuperCorrect-7B on two widely used English math benchmarks, GSM8K and MATH. All evaluations are tested with our evaluation method, which is zero-shot hierarchical-thought-based prompting.
-
- Citation
- -
- @article{yang2024supercorrect,
- title={SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights},
- author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Xu, Minkai and Gonzalez, Joseph E and Cui, Bin and Yan, Shuicheng},
- journal={arXiv preprint arXiv:2410.09008},
- year={2024}
- }
- @article{yang2024buffer,
- title={Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models},
- author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Cao, Shiyi and Xu, Minkai and Zhang, Wentao and Gonzalez, Joseph E and Cui, Bin},
- journal={arXiv preprint arXiv:2406.04271},
- year={2024}
- }
-
- Acknowledgements
- -
- Our SuperCorrect is a two-stage fine-tuned model based on several extraordinary open-source models such as Qwen2.5-Math, DeepSeek-Math, and the Llama3 series. Our evaluation method is based on the codebases of outstanding works such as Qwen2.5-Math and lm-evaluation-harness. We also want to express our gratitude for amazing works such as BoT, which provides the idea of thought templates.
-
- ---
  ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux)
-
- ```bash
- brew install llama.cpp
- ```
-
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo Triangle104/SuperCorrect-7B-Q4_K_S-GGUF --hf-file supercorrect-7b-q4_k_s.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo Triangle104/SuperCorrect-7B-Q4_K_S-GGUF --hf-file supercorrect-7b-q4_k_s.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo Triangle104/SuperCorrect-7B-Q4_K_S-GGUF --hf-file supercorrect-7b-q4_k_s.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo Triangle104/SuperCorrect-7B-Q4_K_S-GGUF --hf-file supercorrect-7b-q4_k_s.gguf -c 2048
- ```

  ---
+ base_model: BitStarWalkin/SuperCorrect-7B
  language:
  - en
+ library_name: transformers
+ license: apache-2.0
  metrics:
  - accuracy
  tags:
  - llama-cpp
  - gguf-my-repo
+ pipeline_tag: question-answering
  ---
 
  # Triangle104/SuperCorrect-7B-Q4_K_S-GGUF

+ This model was converted to GGUF format from [`BitStarWalkin/SuperCorrect-7B`](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) for more details on the original model. This version is specifically designed for use with `llama.cpp`.
 
+ ## SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights
+
+ [Paper](https://hf.co/papers/2410.09008) | [Code](https://github.com/YangLing0818/SuperCorrect-llm)
 
 
+ This model uses a novel two-stage fine-tuning method to improve reasoning accuracy and self-correction ability for LLMs, particularly in mathematical reasoning. It incorporates hierarchical thought templates ([Buffer of Thought (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) for more deliberate reasoning.
+
+ Notably, SuperCorrect-7B significantly surpasses DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on MATH/GSM8K benchmarks, achieving state-of-the-art performance among 7B models.
 
 
+ ## Usage
+
+ This model can be used with `transformers` or `vLLM`. See examples below.
+
+ ### Usage with `transformers`
 
 
+ ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_name = "BitStarWalkin/SuperCorrect-7B"
 
  prompt = "Find the distance between the foci of the ellipse \[9x^2 + \frac{y^2}{9} = 99.\]"
  hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
+
  messages = [
+ {"role": "system", "content": hierarchical_prompt},
  {"role": "user", "content": prompt}
  ]
 
 
  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
  print(response)
+ ```
+
+ ### Usage with `vLLM`
+
+ (Example code from the GitHub README)
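+
+ A minimal sketch of offline inference with vLLM (not copied from the GitHub README): it assumes vLLM's standard `LLM`/`SamplingParams` API, reuses the hierarchical system prompt from the `transformers` example above (shortened here for brevity), and the sampling values are illustrative:
+
+ ```python
+ from transformers import AutoTokenizer
+ from vllm import LLM, SamplingParams
+
+ model_name = "BitStarWalkin/SuperCorrect-7B"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ # Raw string so the LaTeX backslashes survive.
+ prompt = r"Find the distance between the foci of the ellipse \[9x^2 + \frac{y^2}{9} = 99.\]"
+ # Shortened here; use the full hierarchical XML prompt from the transformers example above.
+ hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>."
+
+ messages = [
+     {"role": "system", "content": hierarchical_prompt},
+     {"role": "user", "content": prompt},
+ ]
+ # Render the chat template to a plain prompt string for vLLM.
+ text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+ llm = LLM(model=model_name)
+ sampling_params = SamplingParams(temperature=0.0, max_tokens=2048)
+ outputs = llm.generate([text], sampling_params)
+ print(outputs[0].outputs[0].text)
+ ```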
  ## Use with llama.cpp
+ (Instructions from the original README - retained)
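+
+ For convenience, the quick-start commands from the original card are reproduced below (install llama.cpp through brew, which works on Mac and Linux, then invoke the CLI or the server):
+
+ ```bash
+ brew install llama.cpp
+
+ # CLI:
+ llama-cli --hf-repo Triangle104/SuperCorrect-7B-Q4_K_S-GGUF --hf-file supercorrect-7b-q4_k_s.gguf -p "The meaning to life and the universe is"
+
+ # Server:
+ llama-server --hf-repo Triangle104/SuperCorrect-7B-Q4_K_S-GGUF --hf-file supercorrect-7b-q4_k_s.gguf -c 2048
+ ```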
 
+ ## Evaluation
+
+ (Evaluation information from the original README - retained)
+
+ ## Citation
+
+ (Citation information from the original README - retained)
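+
+ For reference, the BibTeX entries from the original model card:
+
+ ```bibtex
+ @article{yang2024supercorrect,
+   title={SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights},
+   author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Xu, Minkai and Gonzalez, Joseph E and Cui, Bin and Yan, Shuicheng},
+   journal={arXiv preprint arXiv:2410.09008},
+   year={2024}
+ }
+
+ @article{yang2024buffer,
+   title={Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models},
+   author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Cao, Shiyi and Xu, Minkai and Zhang, Wentao and Gonzalez, Joseph E and Cui, Bin},
+   journal={arXiv preprint arXiv:2406.04271},
+   year={2024}
+ }
+ ```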
 
+ ## Acknowledgements
+
+ (Acknowledgements from the original README - retained)