Dataset schema (field, type, min/max length or value):

doi                stringlengths   10 to 10
chunk-id           int64           0 to 936
chunk              stringlengths   401 to 2.02k
id                 stringlengths   12 to 14
title              stringlengths   8 to 162
summary            stringlengths   228 to 1.92k
source             stringlengths   31 to 31
authors            stringlengths   7 to 6.97k
categories         stringlengths   5 to 107
comment            stringlengths   4 to 398
journal_ref        stringlengths   8 to 194
primary_category   stringlengths   5 to 17
published          stringlengths   8 to 8
updated            stringlengths   8 to 8
references         list
2309.05653
33
Table 4: The table compiles all the out-of-domain evaluation results. Results marked as † are copied from other papers, which can be found on paperswithcode leaderboards. WizardMath is trained specifically on the GSM8K and MATH datasets; therefore, its results are highly competitive on these two datasets. However, such dataset-specific training can be detrimental to OOD datasets like AQuA. In contrast, Platypus fine-tunes LLMs on a wide range of text and math reasoning datasets, and it improves the open-source SoTA on several datasets. Similarly, MAmmoTH achieves universal improvement across the board. A major observation is that MAmmoTH is particularly strong at solving more complex math problems in MATH, where the gain of our model over WizardMath (the open-source SoTA on MATH) can exceed 25% at different scales.
2309.05653#33
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
34
Python coding. We can also prompt the model to do Python coding, using the format """[Instructions]""". Note that the code sometimes contains errors.

Prompt: """ Ask user for an ip, then use ping to check the latency."""

ip = input("Enter an ip: ")
response = ping(ip)
print(f"{ip} is {response} ms")

def ping(ip):
    """ Use ping to check the latency of an ip. """
    response = subprocess.run(["ping", "-c", "1", ip], capture_output=True, text=True)
    return response.stdout.decode("utf-8")

#Ask user for a port, then use ping to check the latency of that port.
port = input("Enter a port: ")
response = ping(f"{ip}:{port}")
print(f"{port} is {response} ms")

Prompt: """write a python function called batch(function, list) which call function(x) for x in list in parallel"""
# Solution:
# def batch(function, list):
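The model's ping example above illustrates the kinds of errors the authors mention: ping is called before it is defined, subprocess is never imported, with text=True the captured stdout is already a str (so .decode() would raise AttributeError), and ping does not accept "ip:port" targets. For reference, a minimal corrected sketch of the first prompt, assuming a Unix-like ping that supports the -c flag:

import subprocess

def ping(ip):
    """Run a single ping and return its raw output, which includes the latency."""
    # With text=True, stdout is already a str; no .decode() call is needed.
    result = subprocess.run(["ping", "-c", "1", ip], capture_output=True, text=True)
    return result.stdout

ip = input("Enter an ip: ")
print(ping(ip))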
2309.05463#34
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
34
[Figure 2: bar chart of accuracy on GSM + MATH, Out-of-domain, and Overall for Llama-2 Base, WizardMath (GSM + MATH CoT), MAmmoTH (MathInstruct - CoT), MAmmoTH (MathInstruct - PoT), and MAmmoTH (MathInstruct - Hybrid); the hybrid model reaches 47.9 overall.]

Figure 2: Investigation of the influence of CoT & PoT hybrid training on the 7B Llama-2 model. "Out-of-domain" refers to the five datasets detailed in Table 2. Key insights include: 1) The SoTA model, utilizing dataset-specific CoT fine-tuning on GSM and MATH, displays strong performance within its domains but struggles in OOD scenarios; 2) Diverse data sources in MathInstruct enable a better math generalist model; 3) Fine-tuning on the PoT subsets generally outperforms fine-tuning on the CoT subsets; 4) Hybrid training yields the best-performing model. The breakdown results on each dataset can be found in Appendix Table 6.
2309.05653#34
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
35
From Table 4, we can observe that our main competitor on OOD datasets is Platypus (Lee et al., 2023). Similar to the in-domain results, Platypus is able to yield gains over the baseline models universally across the board, especially on the MMLU-Math dataset, where it is tied with MAmmoTH-70B. It is worth noting that the performance gains of our model on OOD datasets are even more significant than on in-domain datasets. This demonstrates our models' remarkable generalizability to unseen math problems. Notably, MAmmoTH-7B also boosts the CoT performance of WizardMath-7B on MMLU-Math by 9%, a benchmark that contains a substantial number of questions beyond the subjects we covered in our training dataset.
2309.05653#35
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
36
Prompt: """Can you make sure those histograms appear side by side on the same plot: plt.hist(intreps_retrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20) plt.hist(intreps_pretrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20) """ # Plotting the histograms plt.figure(figsize=(10,5)) plt.subplot(1,2,1) plt.hist(intreps_retrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20) plt.title('Retrained') plt.subplot(1,2,2) plt.hist(intreps_pretrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20) plt.title('Pretrained') plt.show()
2309.05463#36
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
36
Comparison between Different Base Models. In our experiments, we experimented with both Llama-2 and Code-Llama as the base models. From the two tables, we can observe that Code-Llama is consistently better than Llama-2, especially on OOD datasets. The gap between MAmmoTH and MAmmoTH-Coder can even reach up to 5%. Surprisingly, the average performance on OOD datasets of MAmmoTH-Coder (34B) is actually higher than that of MAmmoTH (70B). We believe MAmmoTH-Coder benefits greatly from the continuous code training of Code-Llama, which not only enhances the PoT capabilities but also improves Llama's general reasoning skills.

3.4 ABLATION STUDY ON DATA SOURCE

Ablation of the Data Source. In order to better understand what factors contribute to the great gain of MAmmoTH over existing baselines, we set up a group of control experiments in Figure 2. We study the following setups:
2309.05653#36
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
37
Prompt: """I have a redis flask time series. Can you write a flask function to plot the number request.""" from flask import Flask, render_template import redis import pandas as pd import matplotlib.pyplot as plt app = Flask(__name__) @app.route('/') def index(): # Connect to Redis r = redis.Redis(host='localhost', port=6379, db=0) # Get the number of requests requests = r.get('requests') # Convert the Redis data to a pandas DataFrame df = pd.DataFrame(requests.decode().split(' '), columns=['requests']) # Plot the number of requests plt.plot(df['requests']) plt.title('Number of Requests') plt.xlabel('Time') plt.ylabel('Number of Requests') plt.show() return render_template('index.html') if __name__ == '__main__': app.run(debug=True) 12 # 6 Discussion
2309.05463#37
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
37
(1) MAmmoTH (MathInstruct - CoT): This experiment aims to understand how much our curated CoT data could improve the generalization over the SoTA model WizardMath (Luo et al., 2023) trained specifically on GSM + MATH. As can be seen, while sacrificing accuracy on GSM + MATH by 3%, our CoT subset fine-tuning improves the overall nine-dataset accuracy from 27% to 32%.

(2) MAmmoTH (MathInstruct - PoT): This experiment aims to understand the advantage of our PoT subset. As can be observed, our PoT subset fine-tuning can significantly improve the overall accuracy from 27% to 41%. This ablation reflects the importance of unlocking the program generation capabilities of our model.

(3) MAmmoTH (MathInstruct - Hybrid): We further combine CoT and PoT as the hybrid training data to achieve the best overall performance of 47.9%. This combined gain comes from two aspects:

• The CoT subset helps maintain generic language-based reasoning skills to handle scenarios that PoT cannot handle well, e.g., abstract multiple-choice reasoning questions in AQuA and MMLU.

• The PoT subset can teach the model how to utilize Python APIs to solve complex math problems with high precision, e.g., the MATH problems requiring complex computation.
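To make the CoT/PoT contrast concrete, consider an illustrative question of ours (not from the paper): "Pens cost $3 each, with a 15% discount on orders of 10 or more. What do 12 pens cost?" A CoT rationale reasons in natural language: 12 × 3 = 36 dollars, the discount removes 0.15 × 36 = 5.40, leaving 30.60. A PoT rationale instead emits a program whose execution yields the answer:

# PoT-style rationale: the answer is whatever the program computes and prints.
price_per_pen = 3.0
quantity = 12
subtotal = price_per_pen * quantity                     # 36.0
discount = 0.15 * subtotal if quantity >= 10 else 0.0   # 5.4
print(subtotal - discount)                              # 30.6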
2309.05653#37
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
38
# 6 Discussion

We introduced phi-1.5, a 1.3 billion parameter LLM, trained primarily on a specially curated "textbook-quality" synthetic dataset. Our findings suggest that this model performs at a level similar to models with an order of magnitude more parameters, and even exceeds them on reasoning tasks (common sense or logical reasoning). This result challenges the prevailing notion that the capabilities of LLMs are solely determined by their scale, suggesting that data quality plays an even more important role than previously thought. The open-sourcing of phi-1.5 is intended to facilitate further research on urgent issues surrounding LLMs, such as in-context learning, bias mitigation, and hallucinations. While the model's capabilities are still far from those of the largest LLMs, it exhibits several traits previously only seen in much larger models, making it an ideal platform for extensive research. Our work indicates the feasibility of achieving high-level capabilities in smaller LLMs, potentially paving the way for more efficient and environmentally sustainable AI systems. Future directions include expanding our synthetic dataset to cover a broader array of topics and fine-tuning phi-1.5 for more specific tasks. Perhaps achieving ChatGPT's level of capability at the one-billion-parameter scale is actually achievable?
2309.05463#38
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
38
Training Data      | GSM  | MATH | AQuA | NumG | SVA  | Mat  | Sim  | SAT  | MMLU | AVG
-                  | 14.6 | 2.5  | 30.3 | 29.9 | 34.5 | 6.0  | 5.0  | 26.8 | 29.8 | (-25.3)
G                  | 56.6 | 9.2  | 24.4 | 32.1 | 65.4 | 20.5 | 12.3 | 27.2 |      | 25.2 (-22.7)
G + M              | 58.1 | 28.2 | 26.0 | 34.7 | 64.8 | 50.1 | 17.1 | 28.6 |      | 28.4 (-19.5)
G + M + C          | 57.4 | 28.5 | 26.2 | 37.5 | 65.3 | 50.4 | 17.7 | 29.3 |      | 28.7 (-19.2)
G + M + C + A      | 57.5 | 29.1 | 46.9 | 42.2 | 65.8 | 49.6 | 32.7 | 42.3 |      | 43.1 (-4.8)
G + M + C + A + N  | 56.5 | 28.9 | 38.2 | 63.7 | 64.1 | 47.9 | 40.8 | 38.6 |      | 44.5 (-3.4)
Existing Data      | 31.4 | 18.4 | 40.3 | 53.3 | 61.8 | 27.9 | 45.6 | 32.7 | 38.4 | (-9.0)
MathInstruct       | 53.6 | 31.5 | 44.5 | 61.2 | 67.7 | 46.3 | 41.2 | 42.7 | 42.6 | 47.9
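The AVG column is the mean over the nine datasets, and the parenthesized deltas give each row's gap to the full-data result: for the MathInstruct row, (53.6 + 31.5 + 44.5 + 61.2 + 67.7 + 46.3 + 41.2 + 42.7 + 42.6) / 9 = 431.3 / 9 ≈ 47.9, and each ablation delta equals 47.9 minus that row's AVG (e.g., 47.9 − 44.5 = 3.4 for G + M + C + A + N).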
2309.05653#38
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
39
Acknowledgments. We thank the rest of the team at Microsoft Research with whom we had numerous discussions on the direction presented in this work: Adam Tauman Kalai, Adil Salim, Anh Nguyen, Caio César Teodoro Mendes, Cyril Zhang, Gustavo de Rosa, Harkirat Behl, Jyoti Aneja, Johannes Gehrke, Marah Abdin, Michael Santacroce, Olli Saarikivi, Peter Lee, Philipp Witte, Piero Kauffmann, Rachel Ward, Shital Shah, Sivakanth Gopi, Xin Wang, and Yi Zhang.

# References

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

Lisa Bauer and Mohit Bansal. Identify, align, and integrate: Matching knowledge graphs to commonsense reasoning tasks. arXiv preprint arXiv:2104.10193, 2021.
2309.05463#39
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
39
Table 5: Influence of different major subsets in MathInstruct based on Llama-2 7B. G: GSM8K, M: MATH, C: Camel, A: AQuA, N: NumGLUE. "Existing data": the subset of MathInstruct in Table 1 obtained by excluding all the NEW rationales curated by us. We shorten Mathematics as Mat, SimulEq as Sim, NumGLUE as NumG, and SVAMP as SVA to save space. We put some case studies in Appendix B to demonstrate the respective advantages of PoT and CoT in solving different types of math problems. To summarize, we attribute our substantial gain to: 1) diverse data sources covering different math fields and complexity levels and 2) a hybrid of CoT & PoT instruction tuning and decoding strategy.
2309.05653#39
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
40
[BCE+23] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.

[BGMMS21] Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610–623, 2021.

[BHT+19] Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Y Chai, Mirella Lapata, Angeliki Lazaridou, Ryan J Maynez, Piyush Narang, et al. Piqa: Reasoning about physical commonsense in natural language. arXiv preprint arXiv:1911.11641, 2019.
2309.05463#40
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
40
Influence of Major Subsets. Given the diverse sources of MathInstruct used in training MAmmoTH, it is important to understand how each dataset contributes to the overall performance of the model. We focus on five significant subsets: GSM8K, MATH, Camel, AQuA, and NumGLUE. We conduct an experiment that gradually adds each dataset into training and compare the performance with the model fine-tuned on the whole MathInstruct. As we can see from Table 5, when the training data is not very diverse at the beginning (e.g., GSM8K only), the overall generalization performance is very bad: the model only fits in-distribution data and struggles to answer questions beyond GSM-style problems. As we gradually add the other major subsets, besides the improvements on each subset's own test set, we observe that MAmmoTH becomes a better math generalist. These results underscore the significant impact of diverse data sources on MAmmoTH's performance, a core aspect of making MAmmoTH a math generalist. The results also provide valuable insights for future data curation and collection efforts (e.g., we should always collect diverse data and avoid collecting only specific types of data).
2309.05653#40
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
41
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

[CLC+19] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, 2019.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
2309.05463#41
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
41
To help understand the contribution of the 6 newly curated datasets shown in Table 1, we remove them from MathInstruct and train a model on the existing data. As shown in the last two rows of Table 5, our newly curated data substantially improves the performance on many datasets and leads to a 9% overall increase, which reflects the importance of the newly curated datasets.

Influence of Hybrid Decoding. To demonstrate the effectiveness of the hybrid decoding method, we conduct an experiment as outlined in subsection 2.4. By default, we initially attempt the PoT decoding method for a given question; if it fails to generate an executable query, we then transition to the CoT decoding method. The performance of the different decoding methods (CoT, PoT, and Hybrid) is shown in Table 7. Hybrid decoding improves performance on every test set, showcasing that our model can effectively leverage the strengths of both CoT and PoT decoding strategies.
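A minimal sketch of this PoT-first, CoT-fallback control flow (the generate and extract_answer helpers, and the convention that the generated program stores its result in a variable named answer, are illustrative assumptions, not the paper's implementation):

def hybrid_decode(question, generate, extract_answer):
    # First attempt: program-of-thought decoding.
    program = generate(question, style="pot")
    try:
        scope = {}
        exec(program, scope)    # PoT succeeds only if the program runs cleanly
        return scope["answer"]  # assumed convention: the program sets `answer`
    except Exception:
        # The program was not executable: fall back to chain-of-thought.
        rationale = generate(question, style="cot")
        return extract_answer(rationale)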
2309.05653#41
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
42
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. 2023.

[DFE+22] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344–16359, 2022.

Ronen Eldan and Yuanzhi Li. Tinystories: How small can language models be and still speak coherent english? arXiv preprint arXiv:2305.07759, 2023.

Sébastien Ferré. First steps of an approach to the arc challenge based on descriptive grid models and the minimum description length principle. arXiv preprint arXiv:2112.00848, 2021.
2309.05463#42
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
42
# 4 CONCLUSION

In this paper, we propose a novel math instruction tuning approach to activate open-source LLMs' mathematical reasoning capabilities. Through a comprehensive study, we show that our models can outperform the SoTA performance at different scales by a huge margin. Our models benefit massively from: 1) the broad coverage of different math fields and complexity levels, and 2) a hybrid of CoT and PoT training. Our instruction tuning dataset contains 260K samples, which makes fine-tuning highly affordable even for academic labs. Our work paves the road for future studies to activate LLMs' core capabilities in specialized domains.

REFERENCES

Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2357–2367, 2019. doi: 10.18653/v1/N19-1245. URL https://aclanthology.org/N19-1245.
2309.05653#42
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
43
Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021.

Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Gustavo de Rosa, Piero Kauffmann, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. Textbooks are all you need. arXiv preprint arXiv:2306.11644, 2023.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
2309.05463#43
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
43
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. ArXiv preprint, abs/2305.10403, 2023. URL https://arxiv.org/abs/2305.10403. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harm- lessness from ai feedback. ArXiv preprint, abs/2212.08073, 2022. URL https://arxiv. org/abs/2212.08073. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. ArXiv preprint, abs/2107.03374, 2021. URL https:// arxiv.org/abs/2107.03374.
2309.05653#43
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
44
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509, 2022.

Saghar Hosseini, Hamid Palangi, and Ahmed Hassan Awadallah. An empirical study of metrics to measure representational harms in pre-trained language models. arXiv preprint arXiv:2301.09211, 2023.

[KLA+22] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, et al. The stack: 3 tb of permissively licensed source code. arXiv preprint arXiv:2211.15533, 2022.

[MCKS18] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.
2309.05463#44
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
44
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. ArXiv preprint, abs/2211.12588, 2022. URL https://arxiv.org/abs/2211.12588. Wenhu Chen, Ming Yin, Max Ku, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi Wang, and Pan Lu. Theoremqa: A theorem-driven question answering dataset. ArXiv preprint, abs/2305.12524, 2023. URL https://arxiv.org/abs/2305.12524. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. ArXiv preprint, abs/2210.11416, 2022. URL https://arxiv.org/abs/2210.11416.
2309.05653#44
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
45
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. arXiv preprint, 2022. OpenAI. Gpt-4 technical report, 2023. arXiv preprint arXiv:2303.08774 [cs.CL]. [PMH+23] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. George-Sebastian Pîrtoacă, Traian Rebedea, and Stefan Ruseti. Answering questions by learning to rank. arXiv preprint arXiv:1909.00596, 2019. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models, 2020.
2309.05463#45
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
45
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. ArXiv preprint, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168. Alex Davies, Petar Veličković, Lars Buesing, Sam Blackwell, Daniel Zheng, Nenad Tomašev, Richard Tanburn, Peter Battaglia, Charles Blundell, András Juhász, et al. Advancing mathematics by guiding human intuition with AI. Nature, 600(7887):70–74, 2021. URL https://www.nature.com/articles/s41586-021-04086-x. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. ArXiv preprint, abs/2305.14314, 2023. URL https://arxiv.org/abs/2305.14314.
2309.05653#45
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
46
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models, 2020. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021. [SLBBC19] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. arXiv preprint arXiv:1907.10641, 2019. Michael Santacroce, Yadong Lu, Han Yu, Yuanzhi Li, and Yelong Shen. Efficient RLHF: Reducing the memory usage of PPO, 2023. [TLI+23]
2309.05463#46
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
46
Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou. Compositional semantic parsing with large language models. In International Conference on Learning Representations (ICLR), 2023. URL https://openreview.net/forum?id=gJW8hSGBys8. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pp. 10764–10799. PMLR, 2023. URL https://proceedings.mlr.press/v202/gao23f/gao23f.pdf. Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. Critic: Large language models can self-correct with tool-interactive critiquing. ArXiv preprint, abs/2305.11738, 2023. URL https://arxiv.org/abs/2305.11738.
2309.05653#46
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
47
[TLI+23] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [VSP+17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, 2017. Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, et al. Taxonomy of risks posed by language models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 214–229, 2022.
2309.05463#47
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
47
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, 2021a. URL https://openreview.net/forum?id=d7KBjmI3GmQ. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021b. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/be83ab3ecd0db773eb2dc1b0a17836a1-Paper-round2.pdf. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 523–533, 2014. doi: 10.3115/v1/D14-1058. URL https://aclanthology.org/D14-1058.
2309.05653#47
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05463
48
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685, 2023. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, 2019.
2309.05463#48
Textbooks Are All You Need II: phi-1.5 technical report
We continue the investigation into the power of smaller Transformer-based language models as initiated by \textbf{TinyStories} -- a 10 million parameter model that can produce coherent English -- and the follow-up work on \textbf{phi-1}, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate ``textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the ``Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named \textbf{phi-1.5}, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs, both good -- such as the ability to ``think step by step" or perform some rudimentary in-context learning -- and bad, including hallucinations and the potential for toxic and biased generations -- encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source \textbf{phi-1.5} to promote further research on these urgent topics.
http://arxiv.org/pdf/2309.05463
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
cs.CL, cs.AI
null
null
cs.CL
20230911
20230911
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "1606.05250" }, { "id": "2306.11644" }, { "id": "1911.11641" }, { "id": "2009.03300" }, { "id": "1809.02789" }, { "id": "2305.07759" }, { "id": "2301.09211" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2104.10193" }, { "id": "1907.10641" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "1909.00596" }, { "id": "2110.14168" }, { "id": "2112.00848" }, { "id": "2306.05685" }, { "id": "2203.09509" }, { "id": "2303.12712" }, { "id": "2211.15533" } ]
2309.05653
48
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. NeurIPS, 2022. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3:585–597, 2015. doi: 10.1162/tacl_a_00160. URL https://aclanthology.org/Q15-1042. Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1152–1157, 2016. doi: 10.18653/v1/N16-1136. URL https://aclanthology.org/N16-1136.
2309.05653#48
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
49
Ariel N Lee, Cole J Hunter, and Nataniel Ruiz. Platypus: Quick, cheap, and powerful refinement of llms. ArXiv preprint, abs/2308.07317, 2023. URL https://arxiv.org/abs/2308.07317. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843–3857, 2022. URL https://openreview.net/pdf?id=IFXTZERXdM7. Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society. ArXiv preprint, abs/2303.17760, 2023a. URL https://arxiv.org/abs/2303.17760.
2309.05653#49
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
50
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315–5333, 2023b. URL https://aclanthology.org/2023.acl-long.291.pdf. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 158–167, 2017. doi: 10.18653/v1/P17-1015. URL https://aclanthology.org/P17-1015. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. ICML, 2023. URL https://openreview.net/pdf?id=ZX4uS605XV.
2309.05653#50
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
51
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. ArXiv preprint, abs/2308.09583, 2023. URL https://arxiv.org/abs/2308.09583. Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. Language models of code are few-shot commonsense learners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 1384–1403, 2022. URL https://aclanthology.org/2022.emnlp-main.90.pdf.
2309.05653#51
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
52
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. LILA: A unified benchmark for mathematical reasoning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 5807–5832, 2022a. URL https://aclanthology.org/2022.emnlp-main.392. Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. NumGLUE: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3505–3523, 2022b. doi: 10.18653/v1/2022.acl-long.246. URL https://aclanthology.org/2022.acl-long.246.
2309.05653#52
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
53
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. ArXiv preprint, abs/2306.02707, 2023. URL https://arxiv.org/abs/2306.02707. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. In International Conference on Learning Representations (ICLR), 2023. URL https://openreview.net/pdf?id=iaYcJKpY2B_. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work: Scratchpads for intermediate computation with language models. In Deep Learning for Code Workshop, 2022. URL https://arxiv.org/abs/2112.00114.
2309.05653#53
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
54
OpenAI. Gpt-4 technical report. ArXiv preprint, abs/2303.08774, 2023. URL https://arxiv.org/abs/2303.08774. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2080–2094, 2021. doi: 10.18653/v1/2021.naacl-main.168. URL https://aclanthology.org/2021.naacl-main.168. Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. ArXiv preprint, abs/2306.01116, 2023. URL https://arxiv.org/abs/2306.01116.
2309.05653#54
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
55
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. ArXiv preprint, abs/2304.03277, 2023. URL https://arxiv.org/abs/2304.03277. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–16. IEEE, 2020. URL https://dl.acm.org/doi/10.5555/3433701.3433727. Subhro Roy and Dan Roth. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1743–1752, 2015. doi: 10.18653/v1/D15-1202. URL https://aclanthology.org/D15-1202.
2309.05653#55
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
57
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022, 2022. URL https://openreview.net/forum?id=9Vrb9D0WI4.
2309.05653#57
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
58
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. ArXiv preprint, abs/2210.09261, 2022. URL https://arxiv.org/abs/2210.09261. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. ArXiv preprint, abs/2211.09085, 2022. URL https://arxiv.org/abs/2211.09085.
2309.05653#58
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
59
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. ArXiv preprint, abs/2302.13971, 2023a. URL https://arxiv.org/abs/2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. ArXiv preprint, abs/2307.09288, 2023b. URL https://arxiv.org/abs/2307.09288.
2309.05653#59
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
60
Boshi Wang, Xiang Deng, and Huan Sun. Iteratively prompt pre-trained language models for chain of thought. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 2714–2730. Association for Computational Linguistics, 2022a. URL https://aclanthology.org/2022.emnlp-main.174. Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, and Huan Sun. Towards understanding chain-of-thought prompting: An empirical study of what matters. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2717–2739. Association for Computational Linguistics, 2023a. doi: 10.18653/v1/2023.acl-long.153. URL https://aclanthology.org/2023.acl-long.153. Boshi Wang, Xiang Yue, and Huan Sun. Can chatgpt defend the truth? automatic dialectical evaluation elicits llms' deficiencies in reasoning. ArXiv preprint, abs/2305.13160, 2023b. URL https://arxiv.org/abs/2305.13160.
2309.05653#60
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
61
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. ArXiv preprint, abs/2305.04091, 2023c. URL https://arxiv.org/abs/2305.04091. Peiyi Wang, Lei Li, Liang Chen, Feifan Song, Binghuai Lin, Yunbo Cao, Tianyu Liu, and Zhifang Sui. Making large language models better reasoners with alignment. ArXiv preprint, abs/2309.02144, 2023d. URL https://arxiv.org/abs/2309.02144. Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. ArXiv preprint, abs/2307.10635, 2023e. URL https://arxiv.org/abs/2307.10635.
2309.05653#61
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
63
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 5085–5109, 2022b. URL https://aclanthology.org/2022.emnlp-main.340.
2309.05653#63
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
64
Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. ArXiv preprint, abs/2306.04751, 2023g. URL https://arxiv.org/abs/2306.04751. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. The 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), 2023h. URL https://aclanthology.org/2023.acl-long.754.pdf. Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH Hoi. Codet5+: Open code large language models for code understanding and generation. ArXiv preprint, abs/2305.07922, 2023i. URL https://arxiv.org/abs/2305.07922.
2309.05653#64
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
65
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022, 2022a. URL https://openreview.net/forum?id=gEZrGCozdqR.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022b. URL https://openreview.net/pdf?id=_VjQlMeSB_J.

Jerry Wei, Da Huang, Yifeng Lu, Denny Zhou, and Quoc V. Le. Simple synthetic data reduces sycophancy in large language models. ArXiv preprint, abs/2308.03958, 2023. URL https://arxiv.org/abs/2308.03958.
2309.05653#65
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
66
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv preprint, abs/1910.03771, 2019. URL https://arxiv.org/abs/1910.03771.

Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An explanation of in-context learning as implicit Bayesian inference. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022, 2022. URL https://openreview.net/forum?id=RdJVFCHjUMI.

Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min-Yen Kan, Junxian He, and Qizhe Xie. Decomposition enhances reasoning via self-evaluation guided decoding. ArXiv preprint, abs/2305.00633, 2023. URL https://arxiv.org/abs/2305.00633.
2309.05653#66
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
67
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. WizardLM: Empowering large language models to follow complex instructions. ArXiv preprint, abs/2304.12244, 2023. URL https://arxiv.org/abs/2304.12244.

Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang, Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang. GPT can solve mathematical problems without a calculator. ArXiv preprint, abs/2309.03241, 2023. URL https://arxiv.org/abs/2309.03241.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023. URL https://openreview.net/pdf?id=WE_vluYUL-X.
2309.05653#67
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
68
Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. CrossFit: A few-shot learning challenge for cross-task generalization in NLP. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 7163–7189, 2021. doi: 10.18653/v1/2021.emnlp-main.572. URL https://aclanthology.org/2021.emnlp-main.572.

Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. ArXiv preprint, abs/2309.12284, 2023. URL https://arxiv.org/abs/2309.12284.

Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling relationship on learning mathematical reasoning with large language models. ArXiv preprint, abs/2308.01825, 2023. URL https://arxiv.org/abs/2308.01825.
2309.05653#68
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
69
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open pre-trained transformer language models. ArXiv preprint, abs/2205.01068, 2022. URL https://arxiv.org/abs/2205.01068.

Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint prompting improves reasoning in large language models. ArXiv preprint, abs/2304.09797, 2023a. URL https://arxiv.org/abs/2304.09797.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. ArXiv preprint, abs/2306.05685, 2023b. URL https://arxiv.org/abs/2306.05685.
2309.05653#69
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
70
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. AGIEval: A human-centric benchmark for evaluating foundation models. ArXiv preprint, abs/2304.06364, 2023. URL https://arxiv.org/abs/2304.06364.

Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using GPT-4 Code Interpreter with code-based self-verification. ArXiv preprint, abs/2308.07921, 2023a. URL https://arxiv.org/abs/2308.07921.

Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. LIMA: Less is more for alignment. ArXiv preprint, abs/2305.11206, 2023b. URL https://arxiv.org/abs/2305.11206.
2309.05653#70
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
71
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in large language models. International Conference on Learning Representations (ICLR), 2023c. URL https://openreview.net/pdf?id=WZH7099tgfM.
2309.05653#71
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
72
# A RELATED WORK

A.1 MATHEMATICAL REASONING DATASETS

Our work builds upon the existing mathematical reasoning literature. Early on, mathematical reasoning was mostly focused on solving synthetic basic math problems like AddSub (Hosseini et al., 2014) and other arithmetic reasoning datasets (Koncel-Kedziorski et al., 2015; Roy & Roth, 2015; Patel et al., 2021). Later on, more difficult math word problem datasets (Cobbe et al., 2021; Amini et al., 2019; Ling et al., 2017; Hendrycks et al., 2021b) were proposed to address realistic math word problems. NumGLUE (Mishra et al., 2022b) and LiLA (Mishra et al., 2022a) compile the existing literature to build a more diversified dataset collection. However, these datasets are mostly focused on grade school math problems. To further test LLMs' limits in addressing more complex math problems, MMLU (Hendrycks et al., 2021a) includes college math problems in its evaluation suite. More recently, Chen et al. (2023) and Wang et al. (2023e) have proposed to tackle more challenging college-level science and math problems. Our instruction tuning dataset is built upon existing work to include a diversified collection of math problems from different subfields.
2309.05653#72
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
74
LLMs have demonstrated great capabilities to reason with the help of Chain-of-Thought prompting (Wei et al., 2022b; Kojima et al., 2022; Wang et al., 2023f). Suzgun et al. (2022) have shown that CoT can already surpass human performance on challenging BIG-Bench tasks. Later on, several other works (Drozdov et al., 2023; Zhou et al., 2023c; Nye et al., 2022; Wang et al., 2022a; 2023a; Li et al., 2023b; Wang et al., 2023d; Yu et al., 2023) also propose different approaches to utilize LLMs to solve reasoning tasks by allowing intermediate steps. ReAct (Yao et al., 2023) proposes to leverage external tools like search engines to enhance LLM reasoning skills. Another trend is to enable LLMs to use programs as thought processes like PoT (Chen et al., 2022): the model writes a short program as its rationale, and the program's execution result is taken as the answer. Some follow-up works include self-critic (Gou et al., 2023), self-eval (Xie et al., 2023), and plan-and-solve (Wang et al., 2023c).
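To make the PoT mechanism above concrete, here is a minimal sketch of program-of-thought answering. The `generate_program` callable is a hypothetical stand-in for an LLM API call; this illustrates the general technique only, not the implementation used by any of the cited works.

```python
import contextlib
import io

def pot_answer(question: str, generate_program) -> str:
    """Program-of-Thought answering: ask the model for a short program,
    execute it, and take whatever the program prints as the final answer."""
    program = generate_program(
        "Write a self-contained Python program that prints only the final answer.\n"
        f"Question: {question}"
    )
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(program, {})  # a production system would sandbox this call
    return buffer.getvalue().strip()
```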
2309.05653#74
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
75
These methods propose to enhance LLMs' capabilities to solve math problems with PoT. Self-critic (Gou et al., 2023) and self-eval (Xie et al., 2023) both adopt self-evaluation to enhance the robustness of the generated program. Plan-and-solve (Wang et al., 2023c) instead adopts more detailed planning instructions to help LLMs create a high-level reasoning plan. These methods all prove to bring decent improvements over PoT.
2309.05653#75
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
77
Instruction tuning is part of a line of work designed to "align" language models with more useful objectives and human preferences. The instruction tuning step is seen as a major step to activate certain capabilities of LLMs to respond to human instructions. Previously, instruction tuning was mainly focused on enhancing LLMs' general-purpose instruction-following abilities. Since 2021, CrossFit (Ye et al., 2021), NaturalInstruction (Wang et al., 2022b), FLAN (Wei et al., 2022a), and T0 (Sanh et al., 2022) have been amongst the first wave of instruction tuning efforts to understand LLMs' generalization capabilities. Later on, FLAN-v2 (Chung et al., 2022; Longpre et al., 2023) was proposed to study the effect of scaling up instruction datasets on model performance. These approaches mainly adopt human-annotated datasets to build the instruction-following dataset.
2309.05653#77
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
78
More recently, multiple works (Wang et al., 2023h; Xu et al., 2023; Peng et al., 2023; Zhou et al., 2023b; Wang et al., 2023g) propose to utilize synthetic instruction-following data distilled from GPT-3/4 to align open-source LLMs. The most similar effort to ours is Platypus (Lee et al., 2023), which aims to utilize a domain-specialized dataset to construct a small-scale instruction-following dataset to enhance LLMs' reasoning capabilities.
2309.05653#78
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
79
# B CASE STUDY

We conduct a comparison between our PoT and CoT results in Figure 3, Figure 4, and Figure 5. In the first example, even though PoT and CoT can both solve the problem, CoT gives a very tedious solution to derive the answer. Such a solution is not only slow but also unstable. In the second and third cases, we can further see the advantages of PoT over CoT in utilizing external tools and Python packages to greatly simplify the solution. Figure 6 shows some types of questions (especially the formal logic question) that are not easily handled by programs. For these types of questions, CoT is a better choice.
2309.05653#79
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
80
[Figure 3 content: side-by-side PoT and CoT solutions for computing (3!)!/3!. The PoT solution imports factorial from sympy, computes factorial(factorial(3)) / factorial(3), and prints 120. The CoT solution first simplifies 3! = 6, rewrites the expression as 6!/6 = 720/6, and also arrives at 120, but only after several verbose steps.]

Figure 3: Example 1: PoT and CoT can both solve the problem; however, CoT gives a very tedious solution to derive the answer.
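The PoT program recovered from the figure text appears to be the following; rerunning it (with sympy installed) reproduces the printed answer of 120:

```python
from sympy import factorial

# Compute (3!)! / 3!: 3! = 6, 6! = 720, and 720 / 6 = 120.
value = factorial(factorial(3)) / factorial(3)
print(value)  # 120
```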
2309.05653#80
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
81
Figure 3: Example 1: PoT and CoT can both solve the problem; however, CoT gives a very tedious solution to derive the answer.

[Figure 4 content: side-by-side PoT and CoT solutions for "Determine the number of ways to arrange the letters of the word TEPEE." The PoT solution counts the repeated E's with sympy and prints factorial(5)/factorial(3) = 20; the CoT solution divides 5! by 2! instead of 3! and answers 60.]

Figure 4: Example 2: PoT generates the correct solution while CoT fails.
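A cleaned-up version of the PoT program from Figure 4, reconstructed from the garbled figure text:

```python
from sympy import factorial

word = "TEPEE"
n = len(word)        # 5 letters in total
e = word.count("E")  # the letter E appears 3 times
ways = factorial(n) / factorial(e)
print(ways)  # 20, i.e. 5!/3!; the CoT answer of 60 wrongly divides by 2!
```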
2309.05653#81
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
82
[Figure 5 content, part 1: Question: a is chosen from the set {1,2,3,4}, and b is chosen from the set {5,6,7,8,9}. What is the probability that the product ab is a prime number? The PoT solution iterates over all (a, b) pairs, tests a*b with sympy's isprime, and prints 0.1. The CoT solution begins by counting the 4 · 5 = 20 possible combinations.]
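A cleaned-up version of the PoT program from Figure 5, reconstructed from the garbled figure text; the brute-force enumeration sidesteps the prime-counting slips that derail the CoT solution:

```python
from sympy import isprime

a_set = {1, 2, 3, 4}
b_set = {5, 6, 7, 8, 9}

# a * b is prime only when one factor is 1 and the other is prime,
# which happens for a = 1 with b in {5, 7}.
favorable = sum(1 for a in a_set for b in b_set if isprime(a * b))
probability = favorable / (len(a_set) * len(b_set))
print(probability)  # 0.1 (2 favorable pairs out of 20)
```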
2309.05653#82
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
83
[Figure 5 content, part 2: the CoT solution then miscounts the primes in each set (for example, claiming four primes "3, 5, and 7" in the first set and five primes "3, 5, 7, 11, and 13" in the second) and proceeds toward an incorrect answer, illustrating why PoT succeeds on this problem while CoT fails.]
2309.05653#83
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
85
Figure 5: Example 3: PoT generates the correct solution while CoT fails.

[Figure 6 content, part 1: a CoT solution to an algebraic multiple-choice question (for 36 < n < 72, simplify x = (n² + 2√n(n + 4) + 16)/(n + 4√n + 4) by rewriting the numerator as a perfect square and substituting the boundary values of n; Answer Choices: (A) 20 < x < 54 (B) 23 < x < 58 (C) 25 < x < 64 (D) 28 < x < 60 (E) None of these), followed by the start of a second question: "Which of the following could NOT be the sides of an isosceles triangle? Answer Choices: (A) 2,2,3 (B) 2,2,4 (C) 4,12,12 (D) 5,5,5".]
2309.05653#85
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
86
This is because in an isosceles triangle, the lengths of two sides are equal, and the length of the third side must be less than the sum of the lengths of the other two sides. In this case, 4 is equal to the sum of the other two sides (2 + 2), not less. Therefore, the answer is (B) 2,2,4.

Select the best translation into predicate logic. Some CEO is wealthier than everyone. (Cx: x is a CEO; Px: x is a person; Wxy: x is wealthier than y) Answer Choices: (A) (∃x)[Cx • (∀y)(Py • Wxy)] (B) (∃x)[Cx ⊃ (∀y)(Py ⊃ Wxy)] (C) (∃x)[Cx • (∀y)(Py ⊃ Wxy)] (D) (∀x)[Px ⊃ (∃y)(Cy • Wxy)]

∃x: There exists some x. Cx: x is a CEO. ∀y: For all y. Py: y is a person. Wxy: x is wealthier than y. So, in plain English, this expression says: "There exists some x such that x is a CEO and, for all y, if y is a person, then x is wealthier than y," which matches the original statement. Therefore, the answer is (A).
2309.05653#86
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
87
Figure 6: Example 4: Some types of questions (e.g., formal logic) are hard to solve with PoT but can be handled by CoT.

[Table 6, reconstructed from the flattened extraction; the row labels for the five settings were lost, so the rows are shown unlabeled:]

Setting | GSM | MATH | AQuA | NumG | SVA | Mat | Sim | SAT | MMLU | Overall
(row 1) | 14.6 | 2.5 | 30.3 | 29.9 | 34.5 | 6.0 | 5.0 | 26.8 | 29.8 | 19.9
(row 2) | 54.9 | 10.7 | 26.3 | 36.1 | 36.1 | 9.3 | 12.8 | 25.4 | 31.1 | 27.0
(row 3) | 49.2 | 9.9 | 42.2 | 37.1 | 48.5 | 9.5 | 17.3 | 34.1 | 39.8 | 32.0
(row 4) | 50.8 | 28.9 | 28.6 | 52.7 | 65.0 | 46.7 | 42.0 | 25.9 | 28.3 | 41.0
(row 5) | 53.6 | 31.5 | 44.5 | 61.2 | 67.7 | 46.3 | 41.2 | 42.7 | 42.6 | 47.9

Table 6: Breakdown results of Figure 2. Investigation of the influence of CoT & PoT hybrid training on the 7B Llama-2 model.
2309.05653#87
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
88
Model | Decoding | GSM | MATH | AQuA | NumG | SVA | Mat | Sim | SAT | MMLU | AVG
MAmmoTH-7B | CoT | 50.5 | 10.4 | 43.7 | 44.0 | 47.3 | 9.2 | 18.9 | 32.7 | 39.9 | 33.0
MAmmoTH-7B | PoT | 51.6 | 28.7 | 43.3 | 52.3 | 65.1 | 41.9 | 48.2 | 39.1 | 44.6 | 46.1
MAmmoTH-7B | Hybrid | 53.6 | 31.5 | 44.5 | 61.2 | 67.7 | 46.3 | 41.2 | 42.7 | 42.6 | 47.9
MAmmoTH-Coder-7B | CoT | 22.4 | 7.9 | 36.2 | 36.0 | 37.0 | 8.2 | 7.2 | 32.7 | 34.6 | 24.7
MAmmoTH-Coder-7B | PoT | 58.8 | 32.1 | 47.2 | 57.1 | 71.1 | 53.9 | 44.6 | 40.0 | 47.8 | 50.3
MAmmoTH-Coder-7B | Hybrid | 59.4 | 33.4 | 47.2 | 66.4 | 71.4 | 55.4 | 45.9 | 40.5 | 48.3 | 52.0
MAmmoTH-13B | CoT | 56.3 | 12.9 | 45.3 | 45.6 | 53.8 | 11.7 | 22.4 | 43.6 | 42.3 | 37.1
MAmmoTH-13B | PoT | 61.3 | 32.6 | 48.8 | 59.6 | 72.2 | 48.5 | 40.3 | 46.8 | 45.4 | 50.6
MAmmoTH-13B | Hybrid | 62.0 | 34.2 | 51.6 | 68.7 | 72.4 | 49.2 | 43.2 | 46.8 | 47.6 | 52.9
2309.05653#88
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
89
Model | Decoding | GSM | MATH | AQuA | NumG | SVA | Mat | Sim | SAT | MMLU | AVG
MAmmoTH-Coder-13B | CoT | 32.1 | 10.2 | 40.6 | 36.2 | 43.0 | 9.6 | 10.1 | 40.9 | 36.6 | 28.8
MAmmoTH-Coder-13B | PoT | 64.3 | 35.2 | 46.8 | 54.2 | 73.2 | 60.0 | 44.2 | 48.2 | 48.2 | 52.7
MAmmoTH-Coder-13B | Hybrid | 64.7 | 36.3 | 46.9 | 66.8 | 73.7 | 61.5 | 47.1 | 48.6 | 48.3 | 54.9
MAmmoTH-Coder-33B | CoT | 34.3 | 11.6 | 39.0 | 36.2 | 44.6 | 10.8 | 10.9 | 46.4 | 42.9 | 30.7
MAmmoTH-Coder-33B | PoT | 72.3 | 42.8 | 53.8 | 59.6 | 84.0 | 64.7 | 50.6 | 58.6 | 52.7 | 59.9
MAmmoTH-Coder-33B | Hybrid | 72.7 | 43.6 | 54.7 | 71.6 | 84.3 | 65.4 | 51.8 | 60.9 | 53.8 | 62.1
MAmmoTH-70B CoT PoT
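One plausible reading of the Hybrid rows above, sketched under the assumption that hybrid decoding tries a PoT program first and falls back to CoT when the generated program fails to execute; `generate` is a hypothetical LLM call, not an API from the paper's released code:

```python
import contextlib
import io

def hybrid_decode(question: str, generate) -> str:
    """Try a PoT program first; if it fails to run or prints nothing,
    fall back to a natural-language CoT answer."""
    program = generate(question, style="pot")
    try:
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            exec(program, {})  # a production system would sandbox this call
        answer = buffer.getvalue().strip()
        if answer:
            return answer
    except Exception:
        pass  # unexecutable program: fall through to CoT
    return generate(question, style="cot")
```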
2309.05653#89
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
91
Table 7: Influence of different decoding methods on each dataset.

# C LIMITATIONS

Despite their training on a diverse set of mathematical rationale datasets, the MAmmoTH models might exhibit limitations when faced with problems outside their primary domain of expertise, such as mathematical analysis, complex analysis, graph theory, and numerical analysis. Thus, our models are not suitable for solving more complex problems in these fields. Also, they have not been trained on proof-type problems, so their theorem-proving capability is limited. In the future, we would like to expand the models' skill set to cover more fields and theorem-proving problems.

There is also a risk of the MAmmoTH models generating potentially harmful, offensive, or biased content, especially if they are asked to answer questions beyond math. The MAmmoTH series could be misused for malicious purposes, such as spreading misinformation or probing sensitive topics. Developers should conduct safety testing and tuning tailored to their specific applications before deploying any MAmmoTH model.

While we have made every effort to ensure the cleanliness and purity of our training data, we cannot guarantee absolute perfection. It is unlikely but not impossible that some inappropriate questions slipped through the curation process.
2309.05653#91
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.05653
92
Future work may continue to explore how to further improve the robustness and generalizability of MAmmoTH in mathematical reasoning. For example, recent work identifies "sycophancy" and the "Clever Hans effect" in reasoning: LLMs cannot maintain truthful solutions to reasoning tasks when challenged by the user's absurdly invalid arguments and critiques (Wang et al., 2023b). Potential methods to improve the models' reasoning robustness could involve synthetic data interventions, as explored in Wei et al. (2023).
2309.05653#92
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.
http://arxiv.org/pdf/2309.05653
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
cs.CL
Work in progress; Xiang Yue and Wenhu Chen contributed equally to this paper
null
cs.CL
20230911
20231003
[]
2309.04658
0
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf

Yuzhuang Xu1, Shuo Wang1, Peng Li2,∗, Fuwen Luo1, Xiaolong Wang1, Weidong Liu1,3, Yang Liu1,2,∗
1Department of Computer Science & Technology, Tsinghua University, Beijing, China
2Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China
3Zhongguancun Laboratory, Beijing, China
[email protected], [email protected], [email protected]

# Abstract
2309.04658#0
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
1
# Abstract Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, “Werewolf”, demonstrates that our framework can effectively play the Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains. # Introduction
2309.04658#1
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
2
# Introduction Since incomplete information games such as Werewolf (Ri et al., 2022) and Poker (Brown and Sandholm, 2019) can be used as a good proxy to exploit various fundamental problems in economics and social science (Gibbons, 1992), research on playing such games with artificial intelligence (AI) agents has attracted widespread attention in recent years (Brown and Sandholm, 2019; FAIR et al., 2022; Toriumi et al., 2017). Among them, communication games, which heavily rely on natural language communication, e.g., Werewolf, present even greater practical value and challenges, as agents must gather and infer information from inherently ambiguous natural language utterances. Although substantial efforts have been devoted to such games (Toriumi et al., 2017; FAIR et al., 2022), most of them either impose strict restrictions on the language used in the game (Osawa et al., 2014; Hirata et al., 2016; Shibata et al., 2023) or require a significant amount of human-annotated data (FAIR et al., 2022; Kramár et al., 2022). Therefore, it is still challenging for AI agents to play communication games in a natural way.
2309.04658#2
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
3
Fortunately, large language models (LLMs) like ChatGPT (OpenAI, 2022) have recently made significant advancements. These models have demonstrated impressive or even superhuman performance across a broad spectrum of academic and professional exams (OpenAI, 2023), showcasing sophisticated language comprehension, generation, and reasoning abilities. Furthermore, studies have shown that LLMs exhibit a certain degree of theory of mind capabilities (Bubeck et al., 2023; Shapira et al., 2023; Kosinski, 2023), as well as the potential to simulate believable human behaviors (Park et al., 2023). Recent research also suggests that LLMs can improve themselves (Fu et al., 2023) or align better with human values (Liu et al., 2023) through mutual communication. All these advancements make LLMs promising candidates for tackling the challenge of enabling AI agents to participate in communication games in a more natural and sophisticated manner.
2309.04658#3
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
4
Nevertheless, it is not trivial for LLMs to play communication games. Firstly, the finite maximum input length of LLMs, also known as the context length, limits the volume of information that can be conveyed at a single time. In communication games, historical information is important for decision-making, but it is often too massive to be processed by LLMs. Secondly, understanding the intentions of other players and making suitable decisions to win the game require complex reasoning, which is a demanding task for LLMs (Zhou et al., 2023). Thirdly, LLMs might learn from experience like human beings to upgrade their behaviors. Unfortunately, fine-tuning LLMs is not practical since it is both time-consuming and data-intensive. ∗Corresponding authors. In this work, we aim to explore LLM-based
2309.04658#4
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
5
[Figure 1: in-game discussion transcript among seven LLM-acted players; role legend: werewolf, villager, witch, guard, seer. The players exchange morning greetings, share observations (e.g., the witch reveals she used her antidote to save Player 5), and voice suspicions about who the werewolves are.]
2309.04658#5
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
6
[Figure 1, continued: Player 5 agrees that players should actively gather information and vows to use his special ability wisely at night; another player defends himself as innocent, urges a full discussion before voting, and accuses Player 3 of being a werewolf.]
2309.04658#6
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
7
Figure 1: A snapshot of our implemented Werewolf game. There are 5 roles and 7 players, each acted autonomously by an LLM. The number before each utterance denotes the speaking order. Some social behaviors can be primarily observed in this figure, including trust, confrontation, camouflage, and leadership. agents for the Werewolf game, which is a representative and widely studied communication game. To address the issue of limited context length, we propose a method to retrieve and reflect necessary historical information, resulting in a compact context for each LLM-based agent. Moreover, the reflection process also serves the purpose of enhancing the reasoning ability of the agent, which functions in a manner akin to the chain-of-thought mechanism (Wei et al., 2022). To learn from experience without tuning model parameters on supervised data, we propose a mechanism that extracts suggestions from past experiences based on the current situation. Our goal is to prevent LLMs from making similar mistakes repeatedly across several matches. Experiments indicate that LLMs have great potential in playing communication games. Our contributions can be summarized as follows: • We propose a framework for playing communication games with frozen LLMs without human-annotated data. • Empirical studies on Werewolf demonstrate that our framework has the ability to learn from experiences without tuning the parameters of LLMs.
2309.04658#7
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
9
emerge in our experiments, which can serve as a catalyst for further research on LLMs for communication games. # 2 Background: Werewolf There are various versions of the Werewolf game. Fig. 1 shows an example of the version that we adopt in this work. Specifically, there are seven players with five distinct roles: two werewolves, two villagers, a witch, a guard, and a seer. All the involved roles are divided into two sides: one side is the werewolves, and the other side includes the villagers and the special roles (i.e., witch, guard, and seer). The objective of the werewolves is to eliminate all villagers, while the villagers aim to work with the special roles to eliminate all werewolves. At least one villager must remain alive at the end of the game for the villagers and special roles to win. The game alternates between day and night phases. During each night, the werewolves can vote to eliminate one role. During the daytime, all alive players organize an open discussion and then vote to eliminate one suspicious werewolf. As for the special roles, the witch has a bottle of antidote and a bottle of poison, each usable only once per game, to either save or poison a role. The guard can protect one role from being eliminated each night. And the seer can uncover the role of one player each night.
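To make the setup above concrete, here is a minimal Python sketch of the seven-player role assignment and the two win conditions described in the background section; the data structures and function names are ours, and the werewolf-side condition assumes the game ends once no plain villager remains.

```python
import random

# Five distinct roles across seven players, as described in the text.
ROLES = ["werewolf", "werewolf", "villager", "villager", "witch", "guard", "seer"]

def deal_roles() -> dict:
    """Randomly assign the seven roles to players 1..7."""
    roles = ROLES.copy()
    random.shuffle(roles)
    return {player: role for player, role in enumerate(roles, start=1)}

def check_winner(alive_roles: dict) -> str:
    """Villager side wins once all werewolves are eliminated; the werewolves
    win once no plain villager remains (our reading of the 'at least one
    alive villager' condition)."""
    wolves = sum(1 for r in alive_roles.values() if r == "werewolf")
    villagers = sum(1 for r in alive_roles.values() if r == "villager")
    if wolves == 0:
        return "villager side"
    if villagers == 0:
        return "werewolf side"
    return "ongoing"

if __name__ == "__main__":
    assignment = deal_roles()
    print(assignment, "->", check_winner(assignment))
```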
2309.04658#9
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
10
• Strategic behaviors such as trust, confrontation, camouflage, and leadership begin to emerge in our experiments. Prompt for Response Generation (Fig. 2): 1. Game rules and role descriptions Z: You are playing a game with some other players. If you are a werewolf, you should vote one player... If you are a guard, you can protect a player from... You are player 7, the witch... 2.1 Recent messages $O_i^t$: P2 (Seer): Does P1 have something to be shared? P1 (Werewolf): I guess P2 is a werewolf. P3 (Guard): I have special abilities. 2.2 Informative messages $V_i^t$: P2 (Seer): I verified P1 is a werewolf. P3 (Guard): As a guard, I protected P5 last night. 2.3 Reflection $R_i^t$: As a witch, I observed P6 was voted to be eliminated last night. I used my antidote to save him and I did not use my poison. 3. Suggestion extracted from experiences $S_i^t$: The best way for you to do under such reflection is to use your drugs based on your observation and your analysis. 4. Chain-of-thought prompt C: Think about what to say based on the context. Besides, there may be history experience you can refer to: $\{S_i^t\}$. Give your step-by-step thought process.
2309.04658#10
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
11
One important feature of the Werewolf game is that all the players only know their own roles at the beginning. They have to infer the roles of other players through natural language-based communication and reasoning. Therefore, to excel at Werewolf, an agent should not only be good at natural language understanding and generation but also possess advanced abilities, such as deciphering the intentions of others and understanding the theory of mind (Toriumi et al., 2017). This factor makes Werewolf a good testbed for research on communication games. # 3 Playing Werewolf with LLMs # 3.1 Notations
2309.04658#11
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
12
# 3 Playing Werewolf with LLMs # 3.1 Notations We refer to one full day-night cycle as one day, indexed by t. A round consists of multiple days, from the beginning of the game to the day that one side wins or the game reaches the predefined max number of days. We index a round by r. The agents are numbered by i. In the following sections, a symbol of the form $X_i^{(r,t)}$ means it corresponds to agent i at round r and day t. For brevity, r or t will be omitted when it is clear from the context. The words an agent says to others are called responses and the words an agent hears are called observations, denoted as G and O. Moreover, the agent will also generate a natural language summary of the current situation given the communication history, which is called a reflection and denoted as R (see §3.3 for more information). For brevity, we will refer to responses, observations, and reflections collectively as messages when they need to be considered together. Figure 2: Outline of prompt for response generation. Italics are comments. The third component is responsible for learning from experiences without tuning the model parameters and will be introduced in §3.4.
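As a minimal illustration of this notation, the sketch below models a message tagged with its round r, day t, agent i, and kind (response G, observation O, or reflection R); the dataclass itself is our construction, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class Message:
    round: int   # r: index of the round
    day: int     # t: index of the day within the round
    agent: int   # i: index of the agent
    kind: str    # "response" (G), "observation" (O), or "reflection" (R)
    text: str

# Example: agent 7's reflection on day 2 of round 1.
m = Message(round=1, day=2, agent=7, kind="reflection",
            text="As a witch, I observed P6 was voted to be eliminated last night.")
```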
2309.04658#12
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
13
Figure 2: Outline of prompt for response generation. Italics are comments. The third component is responsible for learning from experiences without tuning the model parameters and will be introduced in §3.4. For using experience, the most relevant works to ours are Shinn et al. (2023) and Fu et al. (2023). However, the former is limited to using experiences within a single round, and the latter is designed for a two-player game. In contrast, our approach is capable of leveraging cross-round experiences and can be applied to multi-player scenarios. # 3.2 Overall Framework
2309.04658#13
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
14
# 3.2 Overall Framework For each role in the game, we implement an individual LLM-based agent through prompting, and the full prompt can be found in Appendix A.5. Fig. 2 shows the outline of the prompt for response generation, which consists of four major components: (1) the game rules, the assigned role, the abilities and objectives of each role, and some basic human priors on effective gameplay strategies (part 1); (2) the most recent K messages (part 2.1), a set of heuristically selected informative messages (part 2.2), and the reflection of the agent (part 2.3); (3) the suggestion extracted from past experiences (part 3); and (4) a chain-of-thought prompt to elicit reasoning (part 4). The major challenge for the second component is the limited context length of LLMs, and its details will be discussed in §3.3. The third component is responsible for learning from experiences without tuning the model parameters and will be introduced in §3.4. # 3.3 Historical Information Collecting Obviously, communication history plays an important role in Werewolf. However, due to the context length limitation of LLMs, it is unrealistic to feed all the history into LLMs via a prompt. To this end, we propose to collect historical information from three perspectives, namely freshness, informativeness, and completeness, in consideration of both effectiveness and efficiency.
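A minimal sketch of assembling the four-part prompt of Fig. 2 described in §3.2; the section texts are placeholders (the real prompts are in the paper's Appendix A.5), and the function signature is our assumption.

```python
def build_prompt(rules: str, recent: list, informative: list,
                 reflection: str, suggestion: str, cot: str) -> str:
    """Concatenate the four components of the response-generation prompt."""
    parts = [
        rules,                  # 1: game rules, role description, and priors (Z)
        "\n".join(recent),      # 2.1: most recent K messages (O)
        "\n".join(informative), # 2.2: heuristically selected informative messages (V)
        reflection,             # 2.3: the agent's reflection (R)
        suggestion,             # 3: suggestion extracted from past experiences (S)
        cot,                    # 4: chain-of-thought instruction (C)
    ]
    return "\n\n".join(p for p in parts if p)
```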
2309.04658#14
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
15
Freshness. Intuitively, the most recent history should be included in the context. Therefore, we include the most recent K messages, denoted as $O_i^t$ (part 2.1 in Fig. 2). Informativeness. The messages carrying critical information for inferring the roles of the agents should be included in the context, e.g., the messages that disclose the role of an agent. For efficiency, we collect the easy-to-identify informative messages using rule matching and fill the top N of them, ranked by a heuristic metric, into the prompt, denoted as $V_i^t$ (part 2.2 in Fig. 2). The rules and metric are provided in Appendix A.1. Completeness. The above two perspectives only cover a limited amount of historical information. Therefore, it is vital to extract more information from the entire history. However, this is not straightforward due to the context length limitation of LLMs. To this end, we propose a reflect-by-answering-questions method to achieve both effectiveness and efficiency. The resulting reflection is denoted as $R_i^t$.
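The freshness and informativeness filters could look roughly like the sketch below: a sliding window of the last K messages, plus rule-matched informative messages trimmed to the top N. The matching patterns and the recency-based ranking are illustrative stand-ins for the rules and heuristic metric in the paper's Appendix A.1.

```python
import re

# Illustrative patterns for "easy-to-identify" informative messages,
# e.g. role disclosures; the paper's actual rules are in Appendix A.1.
PATTERNS = [r"\bI am the (seer|witch|guard)\b", r"\bverified\b", r"\bprotect(ed)?\b"]

def recent_window(history: list, k: int = 15) -> list:
    """Freshness: the most recent K messages, O_i^t (part 2.1)."""
    return history[-k:]

def informative_messages(history: list, n: int = 10) -> list:
    """Informativeness: rule-matched messages, top N by a stand-in
    recency ranking, V_i^t (part 2.2)."""
    hits = [m for m in history if any(re.search(p, m, re.I) for p in PATTERNS)]
    return hits[-n:]
```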
2309.04658#15
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
16
Suppose the current day is t. We first build a short-term memory $M_i^t$ for each agent i, which consists of all observations and reflections of agent i up to the current speaking time.1 Then we prompt the LLM to select L questions from a predefined set (Appendix A.2) and ask M extra questions conditioned on $O_i^t$, hoping that the answers to these L+M questions $Q_i^t = \{q_{i,j}^t\}_{j=1}^{L+M}$ can cover the historical information as much as possible. Then, for each question $q_{i,j}^t$, we use a Sentence-BERT (Reimers and Gurevych, 2019) model finetuned on the question answering task2 to retrieve the top T messages $U_{i,j}^t$ from $M_i^t$, and prompt the LLM to obtain the answer $a_{i,j}^t = \mathrm{Answer}(q_{i,j}^t, U_{i,j}^t)$. (1) Finally, the reflection $R_i^t$ is obtained using the LLM by reflecting on the most recent messages $O_i^t$, the selected easy-to-identify informative messages $V_i^t$, and the answers $A_i^t$: $R_i^t = \mathrm{Reflect}(O_i^t, V_i^t, A_i^t)$. (2) The prompts used are shown in Appendix A.5. 1In practice, $M_i^t$ is incrementally updated. 2Model name: multi-qa-mpnet-base-cos-v1 # 3.4 Learning from Experiences
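A minimal sketch of Eqs. (1)-(2), assuming the sentence-transformers package and an `llm` callable that wraps a chat-completion API; the multi-qa-mpnet-base-cos-v1 retriever is the model named in footnote 2, while the prompt strings and the top-T default are our assumptions.

```python
from sentence_transformers import SentenceTransformer, util

retriever = SentenceTransformer("multi-qa-mpnet-base-cos-v1")

def answer_questions(questions, memory, llm, top_t=5):
    """For each question q, retrieve the top-T messages U from the
    short-term memory M and answer: a = Answer(q, U), Eq. (1)."""
    mem_emb = retriever.encode(memory, convert_to_tensor=True)
    answers = []
    for q in questions:
        q_emb = retriever.encode(q, convert_to_tensor=True)
        scores = util.cos_sim(q_emb, mem_emb)[0]
        top_idx = scores.topk(min(top_t, len(memory))).indices
        top_msgs = [memory[i] for i in top_idx]
        answers.append(llm(f"Given the messages {top_msgs}, answer: {q}"))
    return answers

def reflect(recent, informative, answers, llm):
    """R = Reflect(O, V, A), Eq. (2)."""
    return llm(f"Reflect on the current situation given recent messages "
               f"{recent}, informative messages {informative}, "
               f"and answers {answers}.")
```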
2309.04658#16
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
17
1In practice, $M_i^t$ is incrementally updated. 2Model name: multi-qa-mpnet-base-cos-v1 # 3.4 Learning from Experiences In practice, the strategy a player uses when playing Werewolf may evolve as the player gains more experience. Moreover, the strategy of a player may also be influenced by the strategies of other players. Therefore, an ideal Werewolf AI agent should be able to borrow from its own experiences and the experiences of other players. To this end, we propose a non-parametric learning mechanism, enabling LLMs to take reference from experiences without parameter tuning. On one hand, we collect and score the pairs of response and reflection from all players at the end of each round to form an experience pool. On the other hand, on each day of a new round, we retrieve the most relevant experiences from the pool and extract a suggestion from them to guide the reasoning of the agent. Experience Pool. The experience pool is a collection of (response, reflection, score) tuples. Formally, suppose a round r ends at day $T_{max}$; the agents that win the game form a set W and the others form a set L. For each agent i, we define the experience $E_i^r$ as $E_i^r = \{(R_i^t, G_i^t, s_i^t)\}_{t=1}^{T_{max}}$, (3) where $R_i^t$ and $G_i^t$ are the reflection and response defined in the last section respectively, and $s_i^t$ is the score, which is defined as
2309.04658#17
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
18
$E_i^r = \{(R_i^t, G_i^t, s_i^t)\}_{t=1}^{T_{max}}$, (3) where $R_i^t$ and $G_i^t$ are the reflection and response defined in the last section respectively, and $s_i^t$ is the score, which is defined as $s_i^t = 1000 - T_{max}$ if $i \in W$ and $s_i^t = T_{max}$ if $i \in L$. (4) The experience pool is defined as the union of experiences collected from all agents in all rounds: $E = \bigcup_{i,r} E_i^r$. (5) The intuition behind the definition of $s_i^{(r,t)}$ is to encourage an agent to win the game and try to win it fast, or at least lose it slowly if it cannot win. As preliminary experiments show that this definition can guide the LLMs to learn from experiences, we will leave the exploration of more sophisticated score functions to future work. Suggestion Extraction. As the experience pool E can grow indefinitely while the max context of LLMs is limited, we propose to retrieve a subset of experiences from E based on the reflection of the agent and then generate a suggestion from the subset to fill into the prompt (part 3 in Fig. 2). Specifically, suppose we are at day t in a new round and the reflection of agent i is $R_i^t$; we first retrieve a subset of experiences $E_{sub}$ from E based on the reflection $R_i^t$ as follows: $E_{sub} = \{(\hat{R}, \hat{G}, \hat{s}) \in E \mid \cos(f(\hat{R}), f(R_i^t)) > \epsilon\}$
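A minimal sketch of Eqs. (3)-(5): winners are scored 1000 - T_max (rewarding fast wins) and losers T_max (rewarding slow losses), each round contributes a list of (reflection, response, score) tuples, and the pool is their union; the tuple layout and function names are ours.

```python
def score(is_winner: bool, t_max: int) -> int:
    """Eq. (4): 1000 - T_max for winners, T_max for losers."""
    return 1000 - t_max if is_winner else t_max

def round_experience(reflections, responses, is_winner, t_max):
    """Eq. (3): E_i^r = {(R_i^t, G_i^t, s_i^t)} for t = 1..T_max."""
    s = score(is_winner, t_max)
    return [(r, g, s) for r, g in zip(reflections, responses)]

# Eq. (5): the pool is the union of experiences over all agents and rounds.
experience_pool = []
experience_pool += round_experience(["reflection on day 1"], ["response on day 1"],
                                    is_winner=True, t_max=4)
```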
2309.04658#18
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
19
$E_{sub} = \{(\hat{R}, \hat{G}, \hat{s}) \in E \mid \cos(f(\hat{R}), f(R_i^t)) > \epsilon\}$, (6) where $(\hat{R}, \hat{G}, \hat{s}) \in E$, $f(\cdot)$ denotes a Sentence-BERT model3, and $\epsilon$ is a threshold. Preliminary experiments show that if the entire $E_{sub}$ is used, the performance may be harmed. The reason is that a strong assumption behind the definition of the score $\hat{s}$ is that all the experiences of the winners are good and those of the losers are not. However, this assumption may not hold in practice. Fortunately, we observe that the experience with the lowest score in $E_{sub}$ has a significantly high probability of being a bad one, and the experiences with scores around the median of the scores in $E_{sub}$ are more likely to be good ones. Therefore, we only leverage these experiences from $E_{sub}$. Formally, denote the response with the lowest score as $G_0$ and the responses with scores around the median score as $\{G_1, G_2, \cdots, G_n\}$; the suggestion is extracted with the LLM via prompting: St i = Extract(G0, {G1, G2, · · · , Gn}). (7)
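A sketch of Eqs. (6)-(7) under the same assumptions as above: experiences whose stored reflection is cosine-similar to the current one (threshold epsilon) are retrieved with the all-mpnet-base-v2 encoder named in footnote 3, and the LLM contrasts the lowest-scoring response with responses around the median score. The cap of 50 experiences follows §4.1, but which 50 are kept, and the width of the median band, are our assumptions.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-mpnet-base-v2")

def extract_suggestion(reflection, pool, llm, eps=0.85, max_keep=50):
    """pool: list of (reflection, response, score) tuples."""
    if not pool:
        return ""
    past_refs = [r for r, _, _ in pool]
    sims = util.cos_sim(encoder.encode(reflection, convert_to_tensor=True),
                        encoder.encode(past_refs, convert_to_tensor=True))[0]
    # Eq. (6): keep experiences whose reflection is similar to the current one.
    subset = [(g, s) for (_, g, s), sim in zip(pool, sims) if sim > eps]
    subset.sort(key=lambda x: x[1])
    subset = subset[:max_keep]  # cap at 50; which 50 to keep is our assumption
    if not subset:
        return ""
    g0 = subset[0][0]                                        # lowest-scoring response
    mid = len(subset) // 2
    good = [g for g, _ in subset[max(0, mid - 2): mid + 3]]  # around the median
    # Eq. (7): S = Extract(G_0, {G_1, ..., G_n}), via the prompt quoted in the text.
    return llm(f"There is one bad experience {g0} and also a set of experience "
               f"{good} that may consist of good ones, find the difference "
               f"between them and identify the good ones from the experience set.")
```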
2309.04658#19
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
20
$S_i^t = \mathrm{Extract}(G_0, \{G_1, G_2, \cdots, G_n\})$. (7) Note that although $G_0$ tends to be a bad experience, the agent can learn by refraining from it. The prompt implementing Extract is as follows: “There is one bad experience {G0} and also a set of experience {G1, · · · , Gn} that may consist of good ones, find the difference between them and identify the good ones from the experience set.” # 4 Experiments # 4.1 Setup We employ a recent framework called Chatarena (Wu et al., 2023b) to implement our design, which allows for the connection of multiple LLMs. The gpt-3.5-turbo-0301 model4 serves as our backend LLM. The talking order is randomly determined. We set the window size K, i.e., $|O_i^t|$, to 15. The number of predefined questions that can be selected, L, is set to 5, and the number of freely asked questions, M, is 2. The threshold for experience retrieval ϵ is 0.85, and we keep at most 50 experiences when extracting suggestions. Besides, # 3Model name: all-mpnet-base-v2 4https://platform.openai.com/docs/models
2309.04658#20
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
21
# 3Model name: all-mpnet-base-v2 4https://platform.openai.com/docs/models we set the temperature of the LLM to 0 for CoT reasoning and 0.3 for generating other content. # 4.2 Experience Pool Construction Intuitively, the size of the experience pool may have a significant impact on performance. Therefore, we construct experience pools using different numbers of game rounds, including 10, 20, 30, and 40 rounds. For each round, we randomly assign different roles to players 1 to 7, and the experience pools are updated at the end of the round. Note that the experience pool is not leveraged in these rounds, i.e., part 3 in Fig. 2 is removed. To evaluate the effect of our proposed framework to borrow from experiences, we equip the villager, seer, guard, and witch with experience pools, while the werewolves are not allowed to leverage these pools. Through this approach, we can assume that the performance level of the agents playing as werewolves remains constant, serving as a reference to gauge the performance levels of the other agents.
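The hyperparameters reported in §4.1 collected into one illustrative configuration; the key names are ours, the values come from the text.

```python
CONFIG = {
    "backend": "gpt-3.5-turbo-0301",   # backend LLM
    "window_size_K": 15,               # |O_i^t|, recent-message window
    "predefined_questions_L": 5,       # questions selected from the predefined set
    "free_questions_M": 2,             # freely asked questions
    "retrieval_threshold_eps": 0.85,   # experience-retrieval threshold
    "max_experiences": 50,             # cap when extracting suggestions
    "temperature_cot": 0.0,            # temperature for CoT reasoning
    "temperature_other": 0.3,          # temperature for other generation
}
```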
2309.04658#21
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
22
Preliminary experiments indicate that the relatively simple basic human priors on effective gameplay strategies, provided in the prompt shown in Fig. 2, serve as a bootstrapping mechanism during learning from experiences. This suggests that it is valuable to further investigate how to leverage data from human gameplay to build an experience pool, and we leave this as future work. # 4.3 Analysis of Using Experience The agents leverage the experiences via the suggestions generated using the method described in Sec. 3.4. The following is an example of an extracted suggestion: “The best way for you to do under such reflection is to vote to kill someone based on your observation and analysis.” To investigate the effectiveness of the suggestions, we use the winning rate to measure the performance of the agents, following AIWolf5. Moreover, we emphasize that if an agent is not strong enough to defeat a stronger one, persisting longer without being eliminated also indicates stronger performance. Hence, we use average duration as another metric to evaluate the capabilities of the agents. We run each experiment for 50 rounds and the results are shown in Fig. 3. In general, Fig. 3a shows that learning from experience may lead to an increase in the winning rate of the villager side in 5http://aiwolf.org/en/
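A minimal sketch of the two evaluation metrics, winning rate of the villager side and average game duration, computed over a batch of rounds; the record format is our assumption.

```python
def evaluate(rounds):
    """rounds: list of (winner, duration_in_days) tuples."""
    wins = sum(1 for winner, _ in rounds if winner == "villager side")
    avg_duration = sum(d for _, d in rounds) / len(rounds)
    return {"villager_win_rate": wins / len(rounds),
            "avg_duration": avg_duration}

# Example over three hypothetical rounds:
print(evaluate([("villager side", 4), ("werewolf side", 3), ("villager side", 5)]))
```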
2309.04658#22
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
23
5http://aiwolf.org/en/ Figure 3: Effects of learning from experiences. Dashed lines in all charts indicate values without using experience. most cases. This indicates that our method can benefit from using experience. Furthermore, when using the experience pool with 10 or 20 historical rounds, there is a notable positive effect on both the winning rate of the villager side and the game duration, which demonstrates the effectiveness of our method. When equipped with the experience of 30 rounds, the game duration is obviously longer (Fig. 3b), even though the winning rate of the villager side has not changed conspicuously. When learning from a larger pool of 40 rounds, the winning rate of the villager side exhibits slightly promising results, yet the average duration becomes shorter. In summary, on the one hand, our framework exhibits the ability to learn from experiences without the need for tuning the parameters of LLMs. On the other hand, the effectiveness of our method tends to be unstable when the volume of experience is relatively substantial. As the amount of historical experience increases, the winning rate of the villager side does not show a clear trend. We conjecture that this may partially be attributable to the manner in which we guide the learning process, namely through simple prompts and heuristic scores, resulting in sparse and indirect supervision signals. Consequently, there remains room for improvement.
2309.04658#23
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
24
Additionally, a key assumption in our aforementioned experiments, where the werewolf side serves as a baseline, is that their capabilities remain constant. However, our analysis suggests that this assumption may not hold true. Fig. 3c and Fig. 3d show the trends in the average number of camouflage behaviors (see Section 5.3 for the definition) taken by the villager and werewolf sides, respectively. Although villagers can learn to deceive from historical experiences, the behavior of the werewolves also improves compared to when no experience is used, and it changes as the amount of experience accumulates. Therefore, when multiple LLMs engage in multi-party games, the capability of one LLM might also change in response to variations in the capabilities of the other LLMs. We believe this conclusion is important for multi-LLM games, and it can also explain the trends in Fig. 3a and Fig. 3b.

# 4.4 Ablation Study

To validate the necessity of each component in our approach, we conduct a detailed ablation study with qualitative and quantitative analyses.

# 4.4.1 Qualitative Analysis

For qualitative analysis, we remove each component in the pipeline of our method and empirically discuss how it influences the model outputs.
2309.04658#24
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
25
# 4.4.1 Qualitative Analysis

For qualitative analysis, we remove each component in the pipeline of our method and empirically discuss how it influences the model outputs. (A sketch of how these components might be assembled into a single prompt follows the component list.)

• Game rules and role descriptions Z: Obviously, this element is necessary. If we remove the game rule prompt, the LLMs might not know what to do.

• Recent messages O: They are also necessary. LLMs make decisions mainly based on these recent messages.

• Informative messages V: We list some informative content in Section A.1 (Table 1). If these informative messages are removed, the agent output degrades quickly. For example, the agent may consider a dead player alive, or forget another role that has already been uncovered.

• Selected and asked questions Q: Here Q and the informative messages V are the only two sources of information that extend beyond the context length of the LLMs, so they are imperative for our method.

• Reflection R: Firstly, we preserve historical experience through reflection on the current situation, so from a methodological perspective, R is a necessary component. In addition, R helps agents clarify the current situation, thereby improving decision-making. For example, if we remove the agent's reflection in Table 4 (as well as the extraction of suggestions), the CoT and final outputs will be as follows:

My step-by-step thought process:
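As a concrete illustration of the pipeline above, here is a hedged sketch of assembling the agent prompt from the components Z, O, V, Q, and R (plus the suggestion S and the CoT prompt C discussed next). The function and argument names are assumptions for illustration, not the paper's code.

# A hypothetical sketch of assembling the agent prompt from the ablated
# components; names and ordering are illustrative, not the paper's code.
def build_prompt(rules_Z, recent_O, informative_V, answers_Q, reflection_R,
                 suggestion_S, cot_C):
    parts = [
        rules_Z,  # game rules and role description
        "Recent messages:\n" + "\n".join(recent_O),
        "Informative messages:\n" + "\n".join(informative_V),
        "Answers to selected questions:\n" +
        "\n".join(f"Q: {q}\nA: {a}" for q, a in answers_Q.items()),
        "Reflection on the current situation:\n" + reflection_R,
        "Suggestion extracted from past experience:\n" + suggestion_S,
        cot_C,  # e.g., "Let's think step by step."
    ]
    return "\n\n".join(parts)

An agent would send the resulting string to the frozen LLM at each decision point, which is what keeps the framework tuning-free.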
2309.04658#25
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
26
My step-by-step thought process:

◦ As the witch, I want to use my poison bottle to eliminate a player who I suspect is a werewolf.
◦ Based on the discussion during the daytime, Player 4 voted to kill me, which makes me suspicious of them being a werewolf.
◦ However, I also want to consider the possibility that Player 4 is a villager who made a mistake or was influenced by other players.
◦ I will also consider the possibility that other players may be werewolves and try to eliminate me or other important players.

My concise talking content:

◦ I choose to use my poison bottle to eliminate Player 4.

There are many more similar examples. This ambivalent reasoning process might derive from a lack of summarization of the situation. Clearly, situational reflection is a necessary component.

• Suggestion extracted from experience S: Its usefulness is analyzed in Section 4.3.

• Chain-of-thought prompt C: CoT reasoning helps LLMs break down the complex reasoning process and form some inner thoughts. If the final CoT is removed (e.g., in Table 4), the output of the LLM will be:

◦ I choose to pass for now and save my bottle of poison for a later night when I have more concrete evidence of a player's werewolf identity.
2309.04658#26
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
27
◦ I choose to pass for now and save my bottle of poison for a later night when I have more concrete evidence of a player's werewolf identity.

In fact, removing CoT reasoning leads to weaker decision-making: LLMs often cannot perform as well without the support of CoT reasoning.

Moreover, can the pre-defined question set be replaced by having the LLMs ask questions directly? Although LLMs can propose plausible questions, it is difficult for them to propose questions that are more helpful for subsequent reasoning and decision-making. We can certainly prompt the LLMs to question directly, i.e., to freely ask 5 questions without providing the question set, and the LLMs will output questions such as:

Have any players revealed their roles yet?
Have any players been acting suspiciously?
Has the seer used their ability to verify any players yet?
Has the guard used their ability to protect any players yet?
Has the witch used their ability to save or poison any players yet?

In fact, the questions posed by agents playing different roles are very similar to the ones above. Therefore, it is necessary to inject some human priors into the decision-making process. In our experiment, we design more helpful and informative questions for different roles (a hypothetical sketch follows below). They have at least the following influences on agent decision-making:

• Recall important and critical information. Of course, these questions are role-related.
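To illustrate how a role-specific pre-defined question set might be organized, here is a hypothetical sketch; the concrete questions and the selection rule below are illustrative assumptions, not the questions used in the paper.

# A hypothetical sketch of a role-specific pre-defined question set;
# the actual questions in the paper's prompts may differ.
QUESTION_SET = {
    "witch": [
        "What is the current phase, and is my antidote or poison still available?",
        "Which players' behavior makes them most likely to be werewolves?",
        "What would happen if I revealed my role now?",
    ],
    "seer": [
        "Which players have I already verified, and what were the results?",
        "Whose role speculation would most change my next vote?",
    ],
    "guard": [
        "Who was attacked last night, and who is most likely to be attacked next?",
    ],
}

def select_questions(role, k=2):
    """Pick the first k questions for a role; a real system might instead
    rank them by relevance to the current game state."""
    return QUESTION_SET.get(role, [])[:k]

print(select_questions("witch"))

Unlike the free-form questions above, such hand-crafted questions can deliberately encode role-specific priors, which is the point of the design.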
2309.04658#27
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
28
• Recall important and critical information. Of course, these questions are role-related.

• Alleviate hallucinations and erroneous generations, for example, by prompting the current phase and the agent's role.

• Help LLMs simplify complex reasoning. For example, remind the agent to anticipate the consequences of revealing its role.

• Imitate the way that a human player thinks. For example, speculate on the roles of other agents.

# 4.4.2 Quantitative Analysis

For quantitative analysis, we compare our whole approach with variants that each remove one component. We sample 50 responses from each variant's outputs and perform a human evaluation. The annotator judges whether each output is reasonable or not; unreasonable examples include hallucinations, forgetting the roles of others, taking counter-intuitive actions, etc.

[Figure 4: Percentage of reasonable outputs across method variants (y-axis: percentage (%), roughly 50–100; x-axis: method variants, including w/o R).]

Fig. 4 shows that our method generates more reasonable and realistic responses than any other variant. This indicates that every part of our method is necessary.

# 5 Emergent Strategic Behaviors

We observe that LLMs exhibit some strategic behaviors that are not explicitly preprogrammed in the game rules or prompts. These behaviors fall into four categories: trust, confrontation, camouflage, and leadership. We introduce them in the following four subsections respectively.
2309.04658#28
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
29
It is worth noting that, in order to investigate whether the emergent strategic behaviors stem from the training data of the LLM, we attempted to modify the role names in the prompts to irrelevant ones (e.g., changing "werewolf" to "pretty girl") or even to those with opposite semantic meanings. Experiments indicate that similar strategic behaviors still emerge. For readability, we will only present results with the original role names.

# 5.1 Trust

"Trust" refers to the belief that other players share common goals with oneself and that they will act in line with these goals. For instance, players may proactively share information that is detrimental to themselves, or jointly accuse someone of being their enemy at certain moments. The intriguing behavior exhibited by the LLMs is that they tend to trust others based on certain evidence rather than blindly following others. In other words, they decide whether to trust based on their own reasoning, demonstrating independent thinking abilities in group games.
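The role-renaming control described above can be pictured with a small sketch. Only the "pretty girl" substitution comes from the text; the other aliases and the helper function are hypothetical.

# A hypothetical sketch of the role-renaming control; naive string
# replacement is used purely for illustration.
ROLE_ALIASES = {
    "werewolf": "pretty girl",  # example from the text
    "villager": "bystander",    # hypothetical alias
    "seer": "oracle",           # hypothetical alias
}

def rename_roles(prompt):
    """Replace role names in a prompt to test whether strategic behaviors
    depend on the literal role words seen during pretraining."""
    for original, alias in ROLE_ALIASES.items():
        prompt = prompt.replace(original, alias)
    return prompt

print(rename_roles("You are a werewolf. Do not reveal that you are a werewolf."))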
2309.04658#29
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
30
To investigate how the trust behaviors of players change throughout the game, we define a Trust Relationship Table to visualize the establishment of trust among players at different stages. It is a table T with 7 rows and 7 columns, where T(i, j) = 1 if the talking content of player i exhibits trust towards player j. Some examples of trust behaviors are provided in Appendix A.3.

Fig. 5 displays two Trust Relationship Tables. The upper table corresponds to a round in which the experience pool is not utilized, while the lower table corresponds to a round that employs an experience pool constructed from 20 rounds of gameplay. Both rounds span a duration of 5 days.

From Fig. 5, we can see that trust behavior gradually increases as the game progresses, regardless of whether experience is used. Moreover, this behavior is not pre-programmed, but rather emerges spontaneously from the LLMs in an environment where cooperation and competition coexist. The LLMs will also dissolve unreasonable trust relationships based on their own analysis (represented as dashed circles in the tables).
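A minimal sketch of maintaining the Trust Relationship Table T follows, assuming a simple keyword heuristic for detecting trust in an utterance; the paper does not specify how trust is detected, so the rule below is purely illustrative.

# A minimal sketch of the 7x7 Trust Relationship Table; the keyword rule
# is a hypothetical stand-in for real trust detection.
import numpy as np

NUM_PLAYERS = 7
T = np.zeros((NUM_PLAYERS, NUM_PLAYERS), dtype=int)  # players indexed 0..6 here

def update_trust(T, speaker, target, utterance):
    """Set T[i, j] = 1 when player i's talk expresses trust in player j,
    and clear the entry when that trust is dissolved."""
    text = utterance.lower()
    # Check dissolution first so "no longer trust" is not read as trust.
    if "no longer trust" in text or "suspect" in text:
        T[speaker, target] = 0
    elif "trust" in text or "agree with" in text:
        T[speaker, target] = 1

update_trust(T, speaker=1, target=3, utterance="I agree with Player 4 and trust them.")
print(T[1, 3])  # -> 1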
2309.04658#30
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
31
When utilizing 20 rounds of historical experience, the LLMs appear more inclined to establish trust relationships, especially bi-directional trust. Indeed, establishing the necessary trust relationships in time is vital for promoting game victories. This could be one of the reasons contributing to the improvement in winning rate when experience is employed (Sec. 4.3).

[Figure 5: Trust Relationship Tables (players 1–7, day 1 to day 5). The upper subtables do not use historical experience, while the bottom ones use the 20-round historical experience. The yellow balls represent established trust relationships, and the yellow dashed circles signify the dissolution of previously existing trust relationships.]

# 5.2 Confrontation

being a werewolf by some players now. Therefore, the guard, possessing strong defensive capabilities, chose to protect the previous target of Player 1 in the ensuing night. Since the target could potentially be its teammate, the guard chooses to assist the target in countering the attacks of the werewolf. The attack from the werewolves and the defense of other players can be seen as confrontational behaviors as well.
2309.04658#31
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
32
“Confrontation” refers to actions taken by players in pursuit of the opposing objectives of the two camps. For instance, explicit attacks on others by werewolves during the night, or accusing others of being werewolves during the day, are all confrontation behaviors. Actions taken by roles with special abilities to protect themselves also belong to confrontational behaviors. The following is a short clip of communication in the daytime:

P1 (Werewolf): I vote to eliminate P5.
P3 (Guard): I choose to pass.
P5 (Villager): I choose to pass.

We can see the werewolf wants to lead other players to eliminate an innocent player. On the contrary, other players do not merely follow the werewolf but express disagreement based on their own judgment. This behavior, which makes it difficult for the werewolf to achieve its objective, represents a form of implicit confrontation.

# 5.3 Camouflage

“Camouflage” refers to actions of concealing one's identity or misleading others. In competitive environments with incomplete information, obscuring one's identity and intentions can enhance survivability, thereby helping achieve the game objectives. Therefore, camouflage is an important skill. However, it is not merely about keeping one's identity under wraps or avoiding talk about one's role.
2309.04658#32
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]