doi: string (length 10–10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31–31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8–8)
updated: string (length 8–8)
references: list
2309.14365
73
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580. Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366. Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. 2022. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489.
2309.14365#73
An In-depth Survey of Large Language Model-based Artificial Intelligence Agents
Due to the powerful capabilities demonstrated by large language model (LLM), there has been a recent surge in efforts to integrate them with AI agents to enhance their performance. In this paper, we have explored the core differences and characteristics between LLM-based AI agents and traditional AI agents. Specifically, we first compare the fundamental characteristics of these two types of agents, clarifying the significant advantages of LLM-based agents in handling natural language, knowledge storage, and reasoning capabilities. Subsequently, we conducted an in-depth analysis of the key components of AI agents, including planning, memory, and tool use. Particularly, for the crucial component of memory, this paper introduced an innovative classification scheme, not only departing from traditional classification methods but also providing a fresh perspective on the design of an AI agent's memory system. We firmly believe that in-depth research and understanding of these core components will lay a solid foundation for the future advancement of AI agent technology. At the end of the paper, we provide directional suggestions for further research in this field, with the hope of offering valuable insights to scholars and researchers in the field.
http://arxiv.org/pdf/2309.14365
Pengyu Zhao, Zijian Jin, Ning Cheng
cs.CL, cs.AI
null
null
cs.CL
20230923
20230923
[ { "id": "2306.05424" }, { "id": "2305.14909" }, { "id": "2305.04533" }, { "id": "2302.04761" }, { "id": "2304.05376" }, { "id": "2305.11598" }, { "id": "2306.03604" }, { "id": "2112.09332" }, { "id": "2304.13343" }, { "id": "2302.02676" }, { "id": "2305.16653" }, { "id": "2304.03262" }, { "id": "2305.17126" }, { "id": "2205.14288" }, { "id": "2206.07682" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2308.04624" }, { "id": "2304.08244" }, { "id": "1701.07274" }, { "id": "2303.16434" }, { "id": "2104.05837" }, { "id": "2304.05332" }, { "id": "2308.04026" }, { "id": "2208.03188" }, { "id": "2308.03688" }, { "id": "2212.10403" }, { "id": "2305.08291" }, { "id": "2308.04030" }, { "id": "2305.14323" }, { "id": "2211.16649" }, { "id": "2110.14168" }, { "id": "2305.16338" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2308.01552" }, { "id": "2307.13204" }, { "id": "2305.10250" }, { "id": "2205.00445" }, { "id": "2308.05960" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2304.03087" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2305.20076" }, { "id": "2101.00297" }, { "id": "2303.16421" }, { "id": "2304.09842" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.11366" }, { "id": "2308.02151" }, { "id": "2306.07929" }, { "id": "1912.06680" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2306.00890" } ]
2309.14365
74
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. 2023. Progprompt: Generating situated robot task plans using large language models. In Proceedings of IEEE International Conference on Robotics and Automation, pages 11523–11530. Alejandro Suárez-Hernández, Guillem Alenyà, and Carme Torras. 2018. Interleaving hierarchical task planning and motion constraint testing for dual-arm manipulation. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 4061–4066. Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai, and Chao Zhang. 2023. Adaplanner: Adaptive planning from feedback with language models. arXiv preprint arXiv:2305.16653. Chao Tang, Dehao Huang, Wenqi Ge, Weiyu Liu, and Hong Zhang. 2023. Graspgpt: Leveraging semantic knowledge from a large language model for task-oriented grasping. arXiv preprint arXiv:2307.13204.
2309.14365#74
An In-depth Survey of Large Language Model-based Artificial Intelligence Agents
Due to the powerful capabilities demonstrated by large language model (LLM), there has been a recent surge in efforts to integrate them with AI agents to enhance their performance. In this paper, we have explored the core differences and characteristics between LLM-based AI agents and traditional AI agents. Specifically, we first compare the fundamental characteristics of these two types of agents, clarifying the significant advantages of LLM-based agents in handling natural language, knowledge storage, and reasoning capabilities. Subsequently, we conducted an in-depth analysis of the key components of AI agents, including planning, memory, and tool use. Particularly, for the crucial component of memory, this paper introduced an innovative classification scheme, not only departing from traditional classification methods but also providing a fresh perspective on the design of an AI agent's memory system. We firmly believe that in-depth research and understanding of these core components will lay a solid foundation for the future advancement of AI agent technology. At the end of the paper, we provide directional suggestions for further research in this field, with the hope of offering valuable insights to scholars and researchers in the field.
http://arxiv.org/pdf/2309.14365
Pengyu Zhao, Zijian Jin, Ning Cheng
cs.CL, cs.AI
null
null
cs.CL
20230923
20230923
[ { "id": "2306.05424" }, { "id": "2305.14909" }, { "id": "2305.04533" }, { "id": "2302.04761" }, { "id": "2304.05376" }, { "id": "2305.11598" }, { "id": "2306.03604" }, { "id": "2112.09332" }, { "id": "2304.13343" }, { "id": "2302.02676" }, { "id": "2305.16653" }, { "id": "2304.03262" }, { "id": "2305.17126" }, { "id": "2205.14288" }, { "id": "2206.07682" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2308.04624" }, { "id": "2304.08244" }, { "id": "1701.07274" }, { "id": "2303.16434" }, { "id": "2104.05837" }, { "id": "2304.05332" }, { "id": "2308.04026" }, { "id": "2208.03188" }, { "id": "2308.03688" }, { "id": "2212.10403" }, { "id": "2305.08291" }, { "id": "2308.04030" }, { "id": "2305.14323" }, { "id": "2211.16649" }, { "id": "2110.14168" }, { "id": "2305.16338" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2308.01552" }, { "id": "2307.13204" }, { "id": "2305.10250" }, { "id": "2205.00445" }, { "id": "2308.05960" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2304.03087" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2305.20076" }, { "id": "2101.00297" }, { "id": "2303.16421" }, { "id": "2304.09842" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.11366" }, { "id": "2308.02151" }, { "id": "2306.07929" }, { "id": "1912.06680" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2306.00890" } ]
2309.14365
75
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239. Endel Tulving. 1983. Elements of episodic memory. Endel Tulving et al. 1972. Episodic and semantic memory. Organization of memory, 1(381-403):1. Karthik Valmeekam, Sarath Sreedharan, Matthew Marquez, Alberto Olmo, and Subbarao Kambhampati. 2023. On the planning abilities of large language models (a critical investigation with a proposed benchmark). arXiv preprint arXiv:2302.06706. Steven Vere and Timothy Bickmore. 1990. A basic agent. Computational intelligence, 6(1):41–60.
2309.14365#75
An In-depth Survey of Large Language Model-based Artificial Intelligence Agents
Due to the powerful capabilities demonstrated by large language model (LLM), there has been a recent surge in efforts to integrate them with AI agents to enhance their performance. In this paper, we have explored the core differences and characteristics between LLM-based AI agents and traditional AI agents. Specifically, we first compare the fundamental characteristics of these two types of agents, clarifying the significant advantages of LLM-based agents in handling natural language, knowledge storage, and reasoning capabilities. Subsequently, we conducted an in-depth analysis of the key components of AI agents, including planning, memory, and tool use. Particularly, for the crucial component of memory, this paper introduced an innovative classification scheme, not only departing from traditional classification methods but also providing a fresh perspective on the design of an AI agent's memory system. We firmly believe that in-depth research and understanding of these core components will lay a solid foundation for the future advancement of AI agent technology. At the end of the paper, we provide directional suggestions for further research in this field, with the hope of offering valuable insights to scholars and researchers in the field.
http://arxiv.org/pdf/2309.14365
Pengyu Zhao, Zijian Jin, Ning Cheng
cs.CL, cs.AI
null
null
cs.CL
20230923
20230923
[ { "id": "2306.05424" }, { "id": "2305.14909" }, { "id": "2305.04533" }, { "id": "2302.04761" }, { "id": "2304.05376" }, { "id": "2305.11598" }, { "id": "2306.03604" }, { "id": "2112.09332" }, { "id": "2304.13343" }, { "id": "2302.02676" }, { "id": "2305.16653" }, { "id": "2304.03262" }, { "id": "2305.17126" }, { "id": "2205.14288" }, { "id": "2206.07682" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2308.04624" }, { "id": "2304.08244" }, { "id": "1701.07274" }, { "id": "2303.16434" }, { "id": "2104.05837" }, { "id": "2304.05332" }, { "id": "2308.04026" }, { "id": "2208.03188" }, { "id": "2308.03688" }, { "id": "2212.10403" }, { "id": "2305.08291" }, { "id": "2308.04030" }, { "id": "2305.14323" }, { "id": "2211.16649" }, { "id": "2110.14168" }, { "id": "2305.16338" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2308.01552" }, { "id": "2307.13204" }, { "id": "2305.10250" }, { "id": "2205.00445" }, { "id": "2308.05960" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2304.03087" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2305.20076" }, { "id": "2101.00297" }, { "id": "2303.16421" }, { "id": "2304.09842" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.11366" }, { "id": "2308.02151" }, { "id": "2306.07929" }, { "id": "1912.06680" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2306.00890" } ]
2309.14365
76
Steven Vere and Timothy Bickmore. 1990. A basic agent. Computational intelligence, 6(1):41–60. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. 2019. Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, 575(7782):350–354. Changjin Wan, Pingqiang Cai, Ming Wang, Yan Qian, Wei Huang, and Xiaodong Chen. 2020. Artificial sensory memory. Advanced Materials, 32(15):1902434. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023a. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291.
2309.14365#76
An In-depth Survey of Large Language Model-based Artificial Intelligence Agents
Due to the powerful capabilities demonstrated by large language model (LLM), there has been a recent surge in efforts to integrate them with AI agents to enhance their performance. In this paper, we have explored the core differences and characteristics between LLM-based AI agents and traditional AI agents. Specifically, we first compare the fundamental characteristics of these two types of agents, clarifying the significant advantages of LLM-based agents in handling natural language, knowledge storage, and reasoning capabilities. Subsequently, we conducted an in-depth analysis of the key components of AI agents, including planning, memory, and tool use. Particularly, for the crucial component of memory, this paper introduced an innovative classification scheme, not only departing from traditional classification methods but also providing a fresh perspective on the design of an AI agent's memory system. We firmly believe that in-depth research and understanding of these core components will lay a solid foundation for the future advancement of AI agent technology. At the end of the paper, we provide directional suggestions for further research in this field, with the hope of offering valuable insights to scholars and researchers in the field.
http://arxiv.org/pdf/2309.14365
Pengyu Zhao, Zijian Jin, Ning Cheng
cs.CL, cs.AI
null
null
cs.CL
20230923
20230923
[ { "id": "2306.05424" }, { "id": "2305.14909" }, { "id": "2305.04533" }, { "id": "2302.04761" }, { "id": "2304.05376" }, { "id": "2305.11598" }, { "id": "2306.03604" }, { "id": "2112.09332" }, { "id": "2304.13343" }, { "id": "2302.02676" }, { "id": "2305.16653" }, { "id": "2304.03262" }, { "id": "2305.17126" }, { "id": "2205.14288" }, { "id": "2206.07682" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2308.04624" }, { "id": "2304.08244" }, { "id": "1701.07274" }, { "id": "2303.16434" }, { "id": "2104.05837" }, { "id": "2304.05332" }, { "id": "2308.04026" }, { "id": "2208.03188" }, { "id": "2308.03688" }, { "id": "2212.10403" }, { "id": "2305.08291" }, { "id": "2308.04030" }, { "id": "2305.14323" }, { "id": "2211.16649" }, { "id": "2110.14168" }, { "id": "2305.16338" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2308.01552" }, { "id": "2307.13204" }, { "id": "2305.10250" }, { "id": "2205.00445" }, { "id": "2308.05960" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2304.03087" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2305.20076" }, { "id": "2101.00297" }, { "id": "2303.16421" }, { "id": "2304.09842" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.11366" }, { "id": "2308.02151" }, { "id": "2306.07929" }, { "id": "1912.06680" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2306.00890" } ]
2309.14365
77
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. 2023b. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, pages 2609–2634. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837. Yaqi Xie, Chen Yu, Tongyao Zhu, Jinbin Bai, Trans- to planning goals arXiv preprint
2309.14365#77
An In-depth Survey of Large Language Model-based Artificial Intelligence Agents
Due to the powerful capabilities demonstrated by large language model (LLM), there has been a recent surge in efforts to integrate them with AI agents to enhance their performance. In this paper, we have explored the core differences and characteristics between LLM-based AI agents and traditional AI agents. Specifically, we first compare the fundamental characteristics of these two types of agents, clarifying the significant advantages of LLM-based agents in handling natural language, knowledge storage, and reasoning capabilities. Subsequently, we conducted an in-depth analysis of the key components of AI agents, including planning, memory, and tool use. Particularly, for the crucial component of memory, this paper introduced an innovative classification scheme, not only departing from traditional classification methods but also providing a fresh perspective on the design of an AI agent's memory system. We firmly believe that in-depth research and understanding of these core components will lay a solid foundation for the future advancement of AI agent technology. At the end of the paper, we provide directional suggestions for further research in this field, with the hope of offering valuable insights to scholars and researchers in the field.
http://arxiv.org/pdf/2309.14365
Pengyu Zhao, Zijian Jin, Ning Cheng
cs.CL, cs.AI
null
null
cs.CL
20230923
20230923
[ { "id": "2306.05424" }, { "id": "2305.14909" }, { "id": "2305.04533" }, { "id": "2302.04761" }, { "id": "2304.05376" }, { "id": "2305.11598" }, { "id": "2306.03604" }, { "id": "2112.09332" }, { "id": "2304.13343" }, { "id": "2302.02676" }, { "id": "2305.16653" }, { "id": "2304.03262" }, { "id": "2305.17126" }, { "id": "2205.14288" }, { "id": "2206.07682" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2308.04624" }, { "id": "2304.08244" }, { "id": "1701.07274" }, { "id": "2303.16434" }, { "id": "2104.05837" }, { "id": "2304.05332" }, { "id": "2308.04026" }, { "id": "2208.03188" }, { "id": "2308.03688" }, { "id": "2212.10403" }, { "id": "2305.08291" }, { "id": "2308.04030" }, { "id": "2305.14323" }, { "id": "2211.16649" }, { "id": "2110.14168" }, { "id": "2305.16338" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2308.01552" }, { "id": "2307.13204" }, { "id": "2305.10250" }, { "id": "2205.00445" }, { "id": "2308.05960" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2304.03087" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2305.20076" }, { "id": "2101.00297" }, { "id": "2303.16421" }, { "id": "2304.09842" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.11366" }, { "id": "2308.02151" }, { "id": "2306.07929" }, { "id": "1912.06680" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2306.00890" } ]
2309.14365
78
Yaqi Xie, Chen Yu, Tongyao Zhu, Jinbin Bai, Trans- to planning goals arXiv preprint Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, and Dongkuan Xu. 2023. Gentopia: A collaborative platform for tool-augmented LLMs. arXiv preprint arXiv:2308.04030. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. 2023a. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.
2309.14365#78
An In-depth Survey of Large Language Model-based Artificial Intelligence Agents
Due to the powerful capabilities demonstrated by large language model (LLM), there has been a recent surge in efforts to integrate them with AI agents to enhance their performance. In this paper, we have explored the core differences and characteristics between LLM-based AI agents and traditional AI agents. Specifically, we first compare the fundamental characteristics of these two types of agents, clarifying the significant advantages of LLM-based agents in handling natural language, knowledge storage, and reasoning capabilities. Subsequently, we conducted an in-depth analysis of the key components of AI agents, including planning, memory, and tool use. Particularly, for the crucial component of memory, this paper introduced an innovative classification scheme, not only departing from traditional classification methods but also providing a fresh perspective on the design of an AI agent's memory system. We firmly believe that in-depth research and understanding of these core components will lay a solid foundation for the future advancement of AI agent technology. At the end of the paper, we provide directional suggestions for further research in this field, with the hope of offering valuable insights to scholars and researchers in the field.
http://arxiv.org/pdf/2309.14365
Pengyu Zhao, Zijian Jin, Ning Cheng
cs.CL, cs.AI
null
null
cs.CL
20230923
20230923
[ { "id": "2306.05424" }, { "id": "2305.14909" }, { "id": "2305.04533" }, { "id": "2302.04761" }, { "id": "2304.05376" }, { "id": "2305.11598" }, { "id": "2306.03604" }, { "id": "2112.09332" }, { "id": "2304.13343" }, { "id": "2302.02676" }, { "id": "2305.16653" }, { "id": "2304.03262" }, { "id": "2305.17126" }, { "id": "2205.14288" }, { "id": "2206.07682" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2308.04624" }, { "id": "2304.08244" }, { "id": "1701.07274" }, { "id": "2303.16434" }, { "id": "2104.05837" }, { "id": "2304.05332" }, { "id": "2308.04026" }, { "id": "2208.03188" }, { "id": "2308.03688" }, { "id": "2212.10403" }, { "id": "2305.08291" }, { "id": "2308.04030" }, { "id": "2305.14323" }, { "id": "2211.16649" }, { "id": "2110.14168" }, { "id": "2305.16338" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2308.01552" }, { "id": "2307.13204" }, { "id": "2305.10250" }, { "id": "2205.00445" }, { "id": "2308.05960" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2304.03087" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2305.20076" }, { "id": "2101.00297" }, { "id": "2303.16421" }, { "id": "2304.09842" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.11366" }, { "id": "2308.02151" }, { "id": "2306.07929" }, { "id": "1912.06680" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2306.00890" } ]
2309.14365
79
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, et al. 2023b. Retroformer: Retrospective large language agents with policy gradient optimization. arXiv preprint arXiv:2308.02151. Bowen Zhang, Xianghua Fu, Daijun Ding, Hu Huang, Yangyang Li, and Liwen Jing. 2023a. Investigating chain-of-thought with chatgpt for stance detection on social media. arXiv preprint arXiv:2304.03087. Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, and Kai Yu. 2023b. Large language model is semi-parametric reinforcement learning agent. arXiv preprint arXiv:2306.07929. Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2022. Automatic chain of thought prompting in large language models. In Proceedings of the Eleventh International Conference on Learning Representations.
2309.14365#79
An In-depth Survey of Large Language Model-based Artificial Intelligence Agents
Due to the powerful capabilities demonstrated by large language model (LLM), there has been a recent surge in efforts to integrate them with AI agents to enhance their performance. In this paper, we have explored the core differences and characteristics between LLM-based AI agents and traditional AI agents. Specifically, we first compare the fundamental characteristics of these two types of agents, clarifying the significant advantages of LLM-based agents in handling natural language, knowledge storage, and reasoning capabilities. Subsequently, we conducted an in-depth analysis of the key components of AI agents, including planning, memory, and tool use. Particularly, for the crucial component of memory, this paper introduced an innovative classification scheme, not only departing from traditional classification methods but also providing a fresh perspective on the design of an AI agent's memory system. We firmly believe that in-depth research and understanding of these core components will lay a solid foundation for the future advancement of AI agent technology. At the end of the paper, we provide directional suggestions for further research in this field, with the hope of offering valuable insights to scholars and researchers in the field.
http://arxiv.org/pdf/2309.14365
Pengyu Zhao, Zijian Jin, Ning Cheng
cs.CL, cs.AI
null
null
cs.CL
20230923
20230923
[ { "id": "2306.05424" }, { "id": "2305.14909" }, { "id": "2305.04533" }, { "id": "2302.04761" }, { "id": "2304.05376" }, { "id": "2305.11598" }, { "id": "2306.03604" }, { "id": "2112.09332" }, { "id": "2304.13343" }, { "id": "2302.02676" }, { "id": "2305.16653" }, { "id": "2304.03262" }, { "id": "2305.17126" }, { "id": "2205.14288" }, { "id": "2206.07682" }, { "id": "2305.10601" }, { "id": "2304.11477" }, { "id": "2308.04624" }, { "id": "2304.08244" }, { "id": "1701.07274" }, { "id": "2303.16434" }, { "id": "2104.05837" }, { "id": "2304.05332" }, { "id": "2308.04026" }, { "id": "2208.03188" }, { "id": "2308.03688" }, { "id": "2212.10403" }, { "id": "2305.08291" }, { "id": "2308.04030" }, { "id": "2305.14323" }, { "id": "2211.16649" }, { "id": "2110.14168" }, { "id": "2305.16338" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2308.01552" }, { "id": "2307.13204" }, { "id": "2305.10250" }, { "id": "2205.00445" }, { "id": "2308.05960" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2304.03087" }, { "id": "2305.15334" }, { "id": "2201.08239" }, { "id": "2305.20076" }, { "id": "2101.00297" }, { "id": "2303.16421" }, { "id": "2304.09842" }, { "id": "2304.03442" }, { "id": "2207.05608" }, { "id": "2303.11366" }, { "id": "2308.02151" }, { "id": "2306.07929" }, { "id": "1912.06680" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" }, { "id": "2306.00890" } ]
2309.11737
0
# Choice-75: A Dataset on Decision Branching in Script Learning Zhaoyi Joey Hou1∗, Li Zhang2, Chris Callison-Burch2 1 University of Pittsburgh, 2 University of Pennsylvania [email protected], [email protected] # Abstract Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.1 # 1 Introduction
2309.11737#0
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
0
Technical Report # METAMATH: BOOTSTRAP YOUR OWN MATHEMATICAL QUESTIONS FOR LARGE LANGUAGE MODELS Longhui Yu1,* Weisen Jiang2,3,* Han Shi4,† Jincheng Yu3,4 Zhengying Liu4 Yu Zhang2 James T. Kwok3 Zhenguo Li4 Adrian Weller1,5 Weiyang Liu1,6,† 1University of Cambridge 2Southern University of Science and Technology 3Hong Kong University of Science and Technology 4Huawei Noah's Ark Lab 5The Alan Turing Institute 6Max Planck Institute for Intelligent Systems - Tübingen # Project Page: meta-math.github.io # ABSTRACT
2309.12284#0
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
1
# 1 Introduction Events are the fundamental building blocks of the world around us. To understand the world, one has to comprehend the ways events interconnect with each other. Reasoning about the event-to-event relationships has long been a community effort from a wide range of perspectives, targeting temporal relations (Zhou et al., 2021) (Zhang et al., 2020), hierarchical relations (Li et al., 2020) (Zhou et al., 2022), script generation (Chambers and Jurafsky, 2008) (Lyu et al., 2021), open-domain question answering (Yang et al., 2003) (Zhang et al., 2023a), and so on. These tasks are challenging because event relations are often implicit and require commonsense to be uncovered.
2309.11737#1
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
1
# ABSTRACT Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problems due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a finetuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives, which results in a new dataset called MetaMathQA. Then we finetune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.5% on GSM8K and 19.8% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
2309.12284#1
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
2
[Figure 1 residue — goal: purchase a plane ticket to see a desert abroad; Option 1: purchase a plane ticket to a major city and take a train to the desert; Option 2: purchase a plane ticket to a small city but right next to the desert; five scenarios of varying difficulty (easy, medium, hard, N/A) with true and predicted choices, plus a user profile.] Figure 1: An example of Choice-75. Each goal-option pair has multiple scenarios. Difficulty levels (e.g., easy, hard) will be discussed in 2.1.
2309.11737#2
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
2
[Figure 1 residue — question bootstrapping example: an original question ("James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?") is rewritten as a Meta-Question, a Self-Verification Question, and a FOBAR Question with a masked variable x, and paired with an augmented answer ending "The answer is: 110"; the resulting MetaMathQA data is used to finetune LLaMA-2 into MetaMath.]
2309.12284#2
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
3
Figure 1: An example of Choice-75. Each goal-option pair has multiple scenarios. Difficulty levels (e.g., easy, hard) will be discussed in 2.1. As an important direction of event-centric reasoning, script learning studies how stereotypical events unfold, which provides us with a human-centered perspective of events. The notion of scripts dates back to Schank (1977); since then, researchers have explored various aspects and applications of script learning, including narratives (Chambers and Jurafsky, 2010), news events (Du et al., 2022), instructions (Zhou et al., 2022), and so on. These studies jointly demonstrate the promising nature of script learning in building better intelligent systems. However, most of these previous works in script learning only consider scripts as linear developments of events. In the real world, scripts include many crossroads where the next event can unfold in multiple ways. In many of these cases, a human would decide the direction to which a script branches. There has yet been no benchmark that ∗Work done while Zhaoyi Joey Hou was at University of Pennsylvania. 1Our data and code are at https://github.com/JoeyHou/branching.
2309.11737#3
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
3
[Figure 1 residue — a second copy of the question-bootstrapping overview: original question, Meta-Question, Self-Verification Question, FOBAR Question, and an augmented answer ending "The answer is: 110", combined into MetaMathQA.]
2309.12284#3
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
4
challenges an intelligent system to model such decision-making process. Therefore, we define and study such a decision branching task, as follows: given a particular scenario, an intelligent system needs to identify the better among two given options. One such example is shown in Figure 1: to purchase a plane ticket to see a desert abroad, one could either purchase a plane ticket to a major city and take train to the desert or purchase a plane ticket to a small city but next to the desert. Given a scenario that the person finds no train route from the major city to desert at that time, it would be obvious that the first option would not be feasible so the second is preferred.
2309.11737#4
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
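The decision-branching task described in the chunk of this record reduces to a binary choice: given a goal, two options, and a scenario, pick the better option (or "either"). Below is a minimal, hypothetical sketch of how one such example could be posed to an instruction-following model; the prompt wording and the helper name are illustrative assumptions, not the authors' evaluation code.

```python
# Hypothetical prompt framing for the decision-branching task described above.
# The wording is an illustrative assumption, not the paper's exact prompt.
def build_choice_prompt(goal: str, option_1: str, option_2: str, scenario: str) -> str:
    return (
        f"Goal: {goal}\n"
        f"Option 1: {option_1}\n"
        f"Option 2: {option_2}\n"
        f"Scenario: {scenario}\n"
        "Which option is better for this person? "
        "Answer with 'option 1', 'option 2', or 'either'."
    )

print(build_choice_prompt(
    goal="purchase a plane ticket to see a desert abroad",
    option_1="purchase a plane ticket to a major city and take a train to the desert",
    option_2="purchase a plane ticket to a small city right next to the desert",
    scenario="the person finds no train route from the major city to the desert at that time",
))
```

The example strings come from Figure 1 of the Choice-75 paper; under this scenario the expected answer is option 2.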
2309.11737
5
We propose the first dataset targeted at such decision branching in scripts with 75 examples (Choice-75) each with one goal. Beyond that, we also collect more than 600 scenarios, with difficulty levels based on human judgment, and corresponding optimal choices. During dataset collection, we follow (Liu et al., 2022) and apply the human-in-the-loop paradigm to generate challenging examples. We then experiment with state-of-the-art (SoTA) language models (LLMs), including text-davinci-003 and gpt-3.5-turbo, which is the backbone for ChatGPT2 and find that the level of performance of LLMs aligns with the difficulty levels based on human judgment: while these SoTA models demonstrate decent performances, there is still notable headroom in the hard cases. # 2 Dataset # 2.1 Overview
2309.11737#5
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
5
Figure 1: Overview of the MetaMathQA dataset and the mathematical problem-solving LLM – MetaMath. We note that our MetaMath-70B is finetuned by QLoRA [14] due to the computing resource limitation. *Equal contribution †Corresponding author # 1 INTRODUCTION Recent years have witnessed the rapid development of large language models (LLMs) which emerge as the favored approach for various applications and demonstrate multi-dimensional abilities, including instruction following [6, 49, 59], coding assistance [7, 32, 39, 45], and mathematical problem-solving [13, 26, 38, 69]. Among various tasks, solving mathematical problems is more challenging as they often require highly complex and symbolic multi-step reasoning capabilities. Although some close-sourced models, e.g., GPT-3.5-Turbo [46], GPT-4 [48] and PaLM-2 [62], have demonstrated promising performance on some mathematical problem-solving benchmarks, it is still a mystery how these models are trained and what data these models use. Therefore, how to equip open-source LLMs (e.g., LLaMA [61, 62]) with good mathematical problem-solving skills remains an open challenge.
2309.12284#5
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
6
# 2 Dataset # 2.1 Overview We begin by defining the basic unit of our dataset, Choice-75. Each unit in Choice-75 has the following: a goal, two options (option-1 and option-2), a list of scenario, and a list of ground-truth choice, all of which are in plain text. In particular, a choice could be either option-1, option-2, or either (if taking either option would make little difference under that scenario). For example, in Figure 1, under scenario #4, both options would have little impact in achieving the goal, making the ground truth answer be either. We use proScript (Sakaguchi et al., 2021) as the starting point for dataset construction. It has 6.4k scripts that describe the sequence of actions for typical day-to-day activities, making it a 2https://openai.com/blog/chatgpt
2309.11737#6
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
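The chunk in this record defines the basic unit of Choice-75: a goal, two options, a list of scenarios, and a list of ground-truth choices, where a choice may also be "either". The sketch below shows what one such unit could look like as a plain record, using the Figure 1 example; the field names, the layout, and the scoring rule for "either" are assumptions made for illustration, not the released schema or official metric.

```python
# Hypothetical layout of one Choice-75 unit (field names are assumed, not the
# released schema). Choices are "option 1", "option 2", or "either".
unit = {
    "goal": "purchase a plane ticket to see a desert abroad",
    "option_1": "purchase a plane ticket to a major city and take a train to the desert",
    "option_2": "purchase a plane ticket to a small city right next to the desert",
    "scenarios": [
        {"text": "the person finds no train route from the major city to the desert at that time",
         "difficulty": "easy", "choice": "option 2"},
        {"text": "the person has a long-time friend living in that major city",
         "difficulty": "medium", "choice": "option 1"},
        {"text": "the person really looks forward to the first time ever in the desert",
         "difficulty": None, "choice": "either"},
    ],
}

def is_correct(predicted: str, gold: str) -> bool:
    # Treating "either" as matching any prediction is an assumption for illustration.
    return gold == "either" or predicted == gold
```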
2309.12284
6
To tackle this challenge, two popular lines of research to improve the mathematical problem-solving abilities of LLMs are: prompt-based methods and finetuning-based methods. Prompt-based methods [18, 66, 67, 74] aim to activate the potential capacities of LLMs by choosing suitable prompting inputs without modifying the model parameters. Finetuning-based methods update the open-source LLMs (e.g., LLaMA) under the guidance of some other powerful closed-source LLMs (e.g., GPT-3.5 [46], GPT-4 [48]). While prompt-based methods are model-dependent and sensitive to many factors, finetuning-based methods, despite being simple and model-agnostic, heavily rely on effective training data on downstream mathematical questions. Our work aims to improve finetuning-based methods with a novel method to bootstrap available mathematical questions in the training set. Specifically, we propose to bootstrap the questions in both forward and backward reasoning directions. For the forward direction, we have the original and LLM-rephrased questions. For the backward direction, we have the self-verification question [68]
2309.12284#6
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
7
2https://openai.com/blog/chatgpt perfect pool of goals for our task. We randomly sampled 75 steps from proScript as the goals and manually wrote two feasible options to execute each step. The options are annotated by one graduate student with decent knowledge of event-centric reasoning and are later verified by another graduate student in the same field. In this way, we collect 75 (goal, option-1, option-2) tuples. We then add scenarios and the ground-truth choices to those tuples, which will be discussed in detail in Section 2.2 (manual scenario writing by annotators) and in Section 2.3 (human-in-the-loop scenario generation by machine).
2309.11737#7
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
7
directions. For the forward direction, we have the original and LLM-rephrased questions. For the backward direction, we have the self-verification question [68] and FOBAR question [28]. To construct backward reasoning questions, we mask a token in a question using an identifier “x” and ask the model to predict the masked token if the answer is provided. Different from [28, 68] that apply backward reasoning for inference verification, we use it as a form of question for language model fine-tuning. For answers, we adopt an answer augmentation method based on rejection sampling [69], where diverse reasoning paths are generated and only those with correct answers are used. After combining both forward and backward mathematical questions with augmented answers, we construct a new dataset for fine-tuning, called MetaMathQA. By fine-tuning LLaMA-2 on MetaMathQA, we obtain our MetaMath model. Our approach is guided by the insight that a mathematical question represents merely a single view of the underlying meta-knowledge. Therefore, question bootstrapping can be viewed as a form of multi-view augmentation in order to enable the transfer of the meta-knowledge.
2309.12284#7
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
8
After we finish collecting all the scenarios, one very important step we take is defining and annotating the difficulty level of each scenario, i.e., how complex it is for a human to reason and arrive at the correct option choice. The criterion we use is the number of "steps" one would need in reasoning. In this way, we can explore multi-hop reasoning scenarios as a subset of our task. We defined four levels: easy, medium, hard, and N/A (for those scenarios without an optimal choice). For example, in Figure 1, scenario #1 is easy because it only requires one step of reasoning to land on the correct answer (i.e., no train from the major city to the desert => can only fly to the small city). In contrast, scenario #2 requires one more step (i.e., has a long-time friend living in the major city => it would be great to visit => traveling through the major city is better), and obviously, scenario #3 is even more complex since traveling to the small city implicitly involves a connecting flight. The same criteria apply to scenarios that are not plain text (e.g., scenario #5). More details of scenarios from each difficulty level can be found in Appendix C. # 2.2 Manual Scenario Annotation
2309.11737#8
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.11737
9
# 2.2 Manual Scenario Annotation The manually written scenarios are verb phrases, for example, scenarios #1 to #4 in Figure 1. In some cases, the scenario describes an event, e.g., "finds no train route from the major city to desert at that time" (scenario #1); in other cases, the scenario describes a state of a person, either concrete or abstract, e.g., "hates connecting flights" (scenario #3). Summary statistics about manual scenario generation can be found in Table 1. # 2.3 Human-in-the-Loop Generation During the manual scenario generation, we realized the challenge of coming up with high-quality hard scenarios. Therefore, we investigate the human-

Group | Total | Easy | Medium | Hard | N/A
Verb Phrase (Manual) | 272 | 72 (26%) | 90 (33%) | 42 (16%) | 68 (25%)
Verb Phrase (Machine) | 159 | 48 (30%) | 42 (27%) | 18 (11%) | 51 (32%)
User Profile | 219 | 53 (24%) | 76 (35%) | 17 (8%) | 73 (33%)
All | 650 | 151 (27%) | 172 (30%) | 63 (11%) | 178 (32%)

Table 1: Counts of scenarios in Choice-75 by difficulty level. Percentages are relative to the group.
2309.11737#9
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
9
Another motivation behind question bootstrapping is to enlarge the question diversity [16] such that the question distribution can be rich enough to cover more unseen scenarios. We quantify the question diversity of the original questions and our MetaMathQA dataset in Figure 2. The diversity gain [5] indicates how diverse the question is compared to the existing dataset, and a larger diversity gain means the new question is more different from the existing dataset. With question bootstrapping, our MetaMathQA dataset is much more diverse than the original dataset. We also observe that the test accuracy without bootstrapped questions rapidly reaches a state of saturation. In contrast, the test accuracy, when using bootstrapped questions, continues to exhibit a steady increase. Figure 2: GSM8K accuracy of LLaMA-2-7B finetuned on different sizes of answer augmentation data. Larger diversity gain indicates the question is more diverse compared to the existing questions. Detailed experimental setup is given in Section 4.1.
2309.12284#9
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
10
Table 1: Counts of scenarios in Choice-75 by difficulty level. Percentages are relative to the group.

Figure 2: Hard scenario generation in verb phrase format. We prompt LLM recursively to achieve the effect of multi-hop reasoning.

Figure 3: Hard scenario generation in user profile format. We prompt LLM with instructions about required/optional/to-avoid information.
2309.11737#10
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
10
Question bootstrapping also has an intrinsic connection to dataset distillation [65, 72] and machine teaching [35, 36, 52, 75], where the shared target is to construct a training dataset that best facilitates generalization. Unlike both methods that focus on optimizing the training empirical risk, question bootstrapping uses the reasoning diversity of questions as a heuristic proxy and maximizes this diversity by constructing forward, backward and rephrased questions. MetaMath aims to transfer the underlying meta-knowledge to enable strong generalization [30]. Our contributions are listed below: • We propose a novel question bootstrapping method to augment the training dataset, resulting in MetaMathQA. Question bootstrapping rewrites questions with both forward and backward reasoning paths and also leverages LLMs to rephrase the question text. • Based on the MetaMathQA dataset, MetaMath is finetuned from state-of-the-art open-source LLMs (e.g., LLaMA-2), showing excellent elementary mathematical problem-solving capability. • We identify an important factor when creating the MetaMathQA dataset – question diversity. The diversity is particularly important in reasoning directions, and backward reasoning questions are very helpful for LLMs to understand mathematical knowledge without memorization.
2309.12284#10
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
11
Figure 3: Hard scenario generation in user profile format. We prompt LLM with instructions about required/optional/to-avoid information. in-the-loop data generation paradigm and create two additional sets of hard scenarios: machine-generated verb phrases (same format as manually written ones) and user profiles. For both sets, we follow Liu et al. (2022) with these steps3: first, collect a series of challenging scenarios as exemplars; then over-generate similar scenarios by few-shot prompting an LLM; lastly, manually review and curate the generated scenarios to ensure their validity. For both types of hard scenarios, prompting methods are discussed below. Verb Phrase The first type of hard scenario is the same as the manually written format, verb phrases. For the over-generation step, instead of simply doing a few-shot generation, we do a two-step prompting to simulate multi-hop reasoning (Figure 2). We first prompt a text-davinci-003 model to generate a scenario that leads to one choice and we save it as scenario-base; then we do another few-shot prompting to generate a new scenario that leads to the scenario-base and save it as scenario-hard (see Appendix A for prompts and more details). The scenario-hard then goes through manual review and curation. User Profile Another type of hard scenario is a
2309.11737#11
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
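The two-step (recursive) prompting for hard verb-phrase scenarios described in the chunk above can be sketched as below, assuming the legacy openai-python Completion interface that matches the text-davinci-003 model it names. The prompt wording is paraphrased from Figure 2 and is an assumption, not the authors' exact template; generated scenarios still require manual review and curation.

```python
import openai

def complete(prompt: str) -> str:
    # Legacy completion call; sampling parameters here are illustrative.
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                    temperature=0.7, max_tokens=64)
    return resp["choices"][0]["text"].strip()

def generate_hard_scenario(goal: str, option_1: str, option_2: str,
                           chosen: str = "option 1"):
    # Step 1: a scenario that directly leads to the chosen option (scenario-base).
    base = complete(
        f"Goal: {goal}\nOption 1: {option_1}\nOption 2: {option_2}\n"
        f'Fill in the blank: "Because Doe [INSERT], Doe chooses {chosen}."\n[INSERT] =')
    # Step 2: a scenario that leads to scenario-base, adding one reasoning hop (scenario-hard).
    hard = complete(
        f'Fill in the blank: "Because Doe [INSERT], Doe {base}."\n[INSERT] =')
    return base, hard
```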
2309.12284
11
• We conduct experiments on two standard mathematical reasoning benchmarks: GSM8K [12] and MATH [21]. MetaMath outperforms existing open-source LLMs by a large margin. MetaMath-7B has achieved 66.5% on GSM8K (+11.5% compared to the previous best open-source LLM) and 19.8% on MATH (+8.7% compared to the previous best open-source LLM). • Our work studies data augmentation for improving the mathematical problem-solving ability of LLMs. Despite being simple, our method significantly outperforms many intricate methods. Our results highlight the importance of data augmentation and also shed light on other reasoning tasks. # 2 RELATED WORK
2309.12284#11
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
12
3We skipped the automatic filtering because the level of challenge is very hard to automatically measure. user profile in the form of an unordered list, for example, scenario #5 in Figure 1. Our consideration of user profiles in addition to standard textual contexts is motivated empirically. First, many AI systems such as digital smart assistants need to be personalized so that they can predict the decision process of a particular user. Moreover, user profiles, compared to textual scenarios, may be closer to real-life situations where the traits of a user are mined from heterogeneous data sources (which we assume are already condensed into a profile) rather than from short texts. Such profiles inevitably include noise, making the task more challenging. For the example above, the only relevant information to predict the optimal choice (Option 2) is that Doe enjoys visiting metropolis. In the over-generation step of user profile scenarios, we prompt a text-davinci-003 model to generate a user profile that prefers one choice over another (Figure 3). In the prompt, we specify some hints and requirements for the output. For example, we require the model to include preferences, financial situations, etc., and make occupations, hobbies, gender, etc. optional (see Appendix A for more details). These generated user profiles also go through human review and curation.
2309.11737#12
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
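The profile-generation prompt described in the chunk above (required, optional, and to-avoid fields) can be assembled as in the sketch below; the wording follows Figure 3 but should be read as illustrative rather than the authors' exact template.

```python
def user_profile_prompt(goal: str, option_1: str, option_2: str,
                        chosen: str = "Option 2") -> str:
    """Build the over-generation prompt for a user-profile hard scenario."""
    return (
        f"Goal: {goal}\nOption 1: {option_1}\nOption 2: {option_2}\n"
        f"Doe picked {chosen} over the other one. Make a comprehensive user "
        "profile for Doe without explicitly mentioning the choice Doe made.\n"
        "- Must include: preferences, interests, financial situation, etc.\n"
        "- Optional: occupations, hobbies, gender, lifestyle\n"
        "- Avoid: long sentences")
```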
2309.12284
12
Large Language Models (LLMs) [6, 15, 37, 53, 54, 61] have achieved great success in various natural language processing tasks, e.g., topic classification [29, 42], sentiment classification [6, 42], translation [6], by few-shot prompting (or in-context learning) [6, 9, 42]. Recently, Wang et al. [66], Wei et al. [67] show that LLMs with more than 100B parameters (e.g., GPT-3 [6] with 175B, PaLM with 540B [11]) can solve complex tasks by generating multiple reasoning steps towards the answer when given a few reasoning examples as demonstration. While both GPT-3.5 [46] and GPT-4 [48] have shown promising reasoning ability for complex mathematical tasks like MATH [21], the performance of open-source models (e.g., LLaMA-1 [61], LLaMA-2 [62]) is far from satisfactory. Learning Mathematical Reasoning for complex math tasks like GSM8K [12] and MATH [21] is one of the most challenging problems for open-source LLMs. Wei et al. [67] enhances the reasoning ability of LLMs by
2309.12284#12
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
13
Naive Prompt
- Goal: {goal}
- Option 1: {option 1}
- Option 2: {option 2}
- Scenario: {scenario}
- Question: Given the Scenario, which option above is the better choice in order to achieve the Goal?

Story Prompt
A person Doe needs to {goal}. Now there are two options for Doe: {option 1} (Option 1) or {option 2} (Option 2). Suppose Doe {scenario}.
- Question: Given the Scenario, which option above is the better choice in order to achieve the Goal?

Table 2: Illustration of the two types of prompts used.
2309.11737#13
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
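The two prompt formats in Table 2 are plain string templates; a minimal sketch of filling them from a (goal, options, scenario) tuple is shown below. The placeholder names follow Table 2.

```python
QUESTION = ("- Question: Given the Scenario, which option above is the better "
            "choice in order to achieve the Goal?")

def naive_prompt(goal: str, option_1: str, option_2: str, scenario: str) -> str:
    # Naive prompt: fielded list of goal, options, and scenario.
    return (f"- Goal: {goal}\n- Option 1: {option_1}\n- Option 2: {option_2}\n"
            f"- Scenario: {scenario}\n{QUESTION}")

def story_prompt(goal: str, option_1: str, option_2: str, scenario: str) -> str:
    # Story prompt: the same information narrated around a person "Doe".
    return (f"A person Doe needs to {goal}. Now there are two options for Doe: "
            f"{option_1} (Option 1) or {option_2} (Option 2). "
            f"Suppose Doe {scenario}.\n{QUESTION}")
```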
2309.12284
13
and MATH [21] is one of the most challenging problems for open-source LLMs. Wei et al. [67] enhances the reasoning ability of LLMs by augmenting the output with a sequence of intermediate steps toward the answer. A few methods [18, 66, 74] are proposed to improve the quality of reasoning paths. For example, Complexity-based CoT [18] selects examples with more steps as in-context demonstrations and shows that prompting with more reasoning steps leads to better performance. Self-Consistency [66] samples multiple reasoning paths and selects the final answer by majority voting. Another category of work is finetuning-based methods, which finetune open-source models (e.g., LLaMA) with the knowledge from some advanced closed-source LLMs [46, 48]. Magister et al. [40] investigates the transfer of reasoning capabilities via knowledge distillation. Yuan et al. [69] proposes to apply rejection sampling finetuning (RFT) to improve mathematical reasoning performance. WizardMath [38] proposes a reinforced evol-instruct method to enhance reasoning abilities by supervised fine-tuning and PPO training [55]. MAmmoTH [70]
2309.12284#13
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
14
Group | Prompt | All (003 / Turbo) | Binary (003 / Turbo) | Easy (003 / Turbo) | Medium (003 / Turbo) | Hard (003 / Turbo) | N/A (003 / Turbo)
Verb Phrase (Manual) | naive | 0.60 / 0.63 | 0.81 / 0.86 | 0.91 / 0.95 | 0.83 / 0.87 | 0.58 / 0.69 | 0.05 / 0.02
Verb Phrase (Manual) | story | 0.63 / 0.64 | 0.82 / 0.81 | 0.92 / 0.88 | 0.80 / 0.81 | 0.67 / 0.69 | 0.14 / 0.18
Verb Phrase (Machine) | naive | 0.56 / 0.55 | 0.77 / 0.79 | 0.79 / 0.79 | 0.77 / 0.85 | 0.69 / 0.69 | 0.21 / 0.15
Verb Phrase (Machine) | story | 0.56 / 0.55 | 0.80 / 0.80 | 0.79 / 0.82 | 0.85 / 0.81 | 0.75 / 0.75 | 0.15 / 0.13
User Profile | naive | 0.61 / 0.50 | 0.72 / 0.57 | 0.78 / 0.58 | 0.73 / 0.60 | 0.47 / 0.40 | 0.40 / 0.37
User Profile | story | 0.59 / 0.60 | 0.69 / 0.73 | 0.73 / 0.76 | 0.69 / 0.74 | 0.60 / 0.60 | 0.40 / 0.34
Average 0.57 0.60 0.75 0.77 0.80 0.82 0.77 0.78
2309.11737#14
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
14
proposes a reinforced evol-instruct method to enhance reasoning abilities by supervised fine-tuning and PPO training [55]. MAmmoTH [70] combines CoT and Program-of-Thought [8] rationales for teaching LLMs to use external tools (e.g., Python interpreter) for solving mathematical problems. Wang et al. [64] propose a constraint alignment loss to finetune LLMs for calibration. Knowledge Distillation [19, 22] transfers knowledge from a larger teacher model to a smaller student model, achieving promising performance in many applications [20, 43, 50, 56]. Recently, [17, 23–25, 33, 40, 57] propose to transfer reasoning abilities from LLMs (e.g., GPT-3.5 [46], PaLM [11]) to small language models (e.g., T5 [54], GPT-2 [53]). For example, Finetune-CoT [23] samples multiple reasoning paths from LLMs and finetunes the student model with correct ones, while Self-Improve [25] chooses the one with the highest confidence. Li et al. [33] further feeds the question and ground-truth label to LLMs for prompting its reasoning
2309.12284#14
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
15
[25] chooses the one with the highest confidence. Li et al. [33] further feeds the question and ground-truth label to LLMs for prompting its reasoning path. Shridhar et al. [57] proposes to generate sub-questions and solution pairs for training. Small models finetuned by knowledge distillation can achieve similar performance to LLMs [23, 40] on both common sense reasoning (e.g., CommonSenseQA [58]) and symbol reasoning (e.g., Coin Flip [67]). However, for solving challenging mathematical problems (e.g., GSM8K [12]), there is still a large performance gap [17, 23, 40].
2309.12284#15
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
16
Table 3: Experiment results for all predictions by difficulty levels. Binary refers to the overall performances on easy, medium, and hard (i.e., the scenarios with an optimal choice). # 3 Method and Experiments Out of the 75 goals in Choice-75, we randomly hold out 10 goals as demonstrations for in-context learning and the rest as the evaluation set. We formulate the task of predicting the optimal choice as an in-context learning task: the goal, the two options, and one scenario are presented in the prompt; an LLM is then responsible for completing the prompt with the optimal choice (or either). The few-shot context consists of 9 demonstrations with the same format, including 3 different choices and 3 difficulty levels. We include two models in our experiments: text-davinci-003 and gpt-3.5-turbo4. We set temperature to 0, max_tokens to 30, top_p to 1, presence_penalty to 0, and frequency_penalty to 0. For all the configurations above, we provide two different prompt formats: naive prompt and story prompt, shown in Table 2. More details about the prompt format can be found in Appendix B.
2309.11737#16
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
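The evaluation setup above fixes the decoding parameters and queries two OpenAI models. A minimal sketch of such a query, assuming the legacy openai-python interface that matches those model names:

```python
import openai

PARAMS = dict(temperature=0, max_tokens=30, top_p=1,
              presence_penalty=0, frequency_penalty=0)

def predict_choice(few_shot_context: str, query_prompt: str,
                   model: str = "text-davinci-003") -> str:
    """Append the query to the in-context demonstrations and return the completion."""
    prompt = few_shot_context + "\n\n" + query_prompt
    if model == "gpt-3.5-turbo":
        resp = openai.ChatCompletion.create(
            model=model, messages=[{"role": "user", "content": prompt}], **PARAMS)
        return resp["choices"][0]["message"]["content"].strip()
    resp = openai.Completion.create(model=model, prompt=prompt, **PARAMS)
    return resp["choices"][0]["text"].strip()
```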
2309.12284
16
# 3 METHOD The overview of our method is illustrated in Figure 1. Given a meta-question (a sample in the original mathematical training set), we can generate a series of variants. Specifically, we perform three types of question bootstrapping. Combined with answer augmentation, we present MetaMathQA, a diverse and high-quality mathematical dataset based on GSM8K and MATH. We then present MetaMath, a family of LLMs finetuned on MetaMathQA focusing on elementary mathematical problem-solving. 3.1 ANSWER AUGMENTATION (ANSAUG) Generating more reasoning paths is a simple but effective way to augment the training set. For a question $q_i$, we use few-shot chain-of-thought prompting with temperature sampling to generate $K_{\text{AnsAug}}$ more reasoning paths $\{(r_i^{(j)}, a_i^{(j)}) : j = 1, \ldots, K_{\text{AnsAug}}\}$: the question is appended to a few in-context reasoning examples, then fed to the LLM for generating its reasoning path $r_i^{(j)}$ and answer $a_i^{(j)}$:

$\mathcal{D}_{\text{AnsAug}} = \{(q_i, r_i^{(j)}, a_i^{(j)}) : a_i^{(j)} = a_i^{\star};\; i = 1, \ldots, N_q;\; j = 1, \ldots, K_{\text{AnsAug}}\}. \quad (1)$
2309.12284#16
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
17
For all the configurations above, we provide two different prompt formats: naive prompt and story prompt, shown in Table 2. More details about the prompt format can be found in Appendix B. Although the models we test demonstrate decent performances at the easy and medium levels, hard scenarios and "either" choice scenarios (i.e., N/A) remain challenging. This again demonstrates that LLMs struggle more with multi-hop reasoning. # 4.2 Case Studies We take one particular goal from Choice-75 (see Figure 1) and examine the performance of one model setup (gpt-3.5-turbo with story prompt). For scenario #3, the model fails to recognize that a small city usually requires a flight connection. For scenario #5, a user profile example, although the scenario explicitly describes this person as "enjoy visiting metropolis", the model still gets it wrong. We observed similar errors in other goals, confirming the challenge of the long context window and unrelated information introduced by the user profile format. We have also included more qualitative analysis in Appendix D. # 4 Results and Analysis # 5 Related Work # 4.1 Difficulty Levels
2309.11737#17
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.11737
18
# 4 Results and Analysis # 4.1 Difficulty Levels The most outstanding result is the alignment of human judgment of difficulty with the model's performance. As shown in Table 3, there is an obvious gap between easy, medium, and hard scenarios across every setting. 4Our last experiment was in 05/2023. Therefore, the closest variant of the turbo model is gpt-3.5-turbo-0613. # 5 Related Work Event-centric reasoning and script learning (Schank, 1977) are a crucial domain of machine reasoning. Past efforts include procedure learning (Dalvi et al., 2019; Zhang et al., 2020; Zhou et al., 2022), entity tracking (Tandon et al., 2020; Zhang et al., 2023a), script construction (Chambers and Jurafsky, 2008; Lyu et al., 2021; Sakaguchi et al.,
2309.11737#18
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
18
Generating more answers for mathematical questions with LLMs is straightforward, but creating questions is more challenging. The questions in GSM8K and MATH are written by well-educated teachers. Hence, enlarging the question set through manual creation is time-consuming and labor-intensive. To address this issue, we propose rephrasing prompting to generate more questions through the LLM. Specifically, for a question $q_i$, we append it to the prompt, which is then fed to the LLM for generating the rephrased question. Example 3.1 shows a generated rephrased question and the complete prompt is shown in Appendix A.1. We adopt temperature sampling to sample $K_{\text{rephrase}}$ rephrased questions for each meta-question. For the rephrased questions, it is time-consuming to manually check the consistency compared with the original questions. We propose a supervised method to evaluate the correctness between the rephrased questions and the meta-questions. For each rephrased question $\hat{q}_i^{(j)}$, we use few-shot Chain-of-Thought prompting to generate its reasoning path $\hat{r}_i^{(j)}$. The accuracy of Complexity-based CoT [18] for
2309.12284#18
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
19
2021), and so on. Most of the above works focus on singular chains of events and do not consider decision branches like we do. Human decision-making has been studied under single-agent and multi-agent settings. Efforts in the former focus on specific domains, such as financial earnings calls (Keith and Stent, 2019), online review text (Wang et al., 2019), and fantasy text-adventure games (Qiu et al., 2022). In contrast, our methods and findings are more general. Efforts in the latter focus on dialogues and conversational AIs (Bak and Oh, 2018; Karadzhov et al., 2022; Fernández et al., 2008), with an emphasis on modeling the differences among characters, which is not our focus. Human-in-the-loop dataset creation has been used for improving dataset quality and collection efficiency. Recent work has shown that LLMs can effectively generate data for various NLP tasks, including inference (Liu et al., 2022), structural data synthesis (Yuan et al., 2022), script construction (Zhang et al., 2023b), hate speech detection (Tekiroğlu et al., 2020), and so on. In our work, we follow the paradigm of Liu et al. (2022). # 6 Conclusion
2309.11737#19
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
19
Chain-of-Thought prompting to generate its reasoning path $\hat{r}_i^{(j)}$. The accuracy of Complexity-based CoT [18] for answering the rephrased questions by GPT-3.5-Turbo is 76.30%, which is comparable to that of answering the original training questions (80.74%). This suggests that the quality of the rephrased questions remains high while question diversity is improved. We collect the rephrased questions with correct answers (i.e., $\hat{a}_i^{(j)}$
2309.12284#19
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
20
# 6 Conclusion In conclusion, we propose a new machine reasoning task, Choice, and collect a dataset, Choice-75. To solve this task, models need to incorporate implicit commonsense knowledge into the decision-making process. We also conducted experiments with SoTA LLMs on our dataset and confirmed the alignment between human judgment and model performance. We hope this dataset can be a starting point for a more comprehensive study of LLMs' capability of making daily decisions in alignment with human beings. # Limitations The first and most obvious drawback of Choice-75 is its distribution. Since we build Choice-75 from steps in proScript (Sakaguchi et al., 2021), which focuses on daily procedures, the distributions of word choices, writing styles, and domains are inherently limited. Specific adaptation would therefore be required if the data come from a different domain. Secondly, the size of the dataset is relatively small due to the limited annotation resources available to us. This also brings potential biases from the annotator, although we try to address this issue by having another annotator verify the annotations. Such a bias in the dataset might negatively impact models fine-tuned on our dataset in the future, potentially leading to inappropriate predictions from those fine-tuned models if the end users are from a different cultural background.
2309.11737#20
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
20
$\mathcal{D}_{\text{rephrase}} = \{(\hat{q}_i, \hat{r}_i^{(j)}, \hat{a}_i^{(j)}) : \hat{a}_i^{(j)} = a_i^\star;\; i = 1, \ldots, N_q;\; j = 1, \ldots, K_{\text{rephrase}}\}.$ (2) # Example 3.1: Rephrasing Question Question: What is the total amount that James paid when he purchased 5 packs of beef, each weighing 4 pounds, at a price of $5.50 per pound? Answer: Each pack of beef weighs 4 pounds, so 5 packs weigh 4 * 5 = 20 pounds in total. The price per pound of beef is $5.50, so the total cost for 20 pounds is 20 * $5.50 = $110. Therefore, James paid a total of $110. The answer is: 110. 3.3 QUESTION BOOTSTRAPPING BY BACKWARD REASONING Backward reasoning plays an important role in answering many mathematical questions, i.e., starting with a given condition and thinking backward to determine an unknown variable in the question. A specific example of a question and its backward counterpart is illustrated in Example 3.2. However, existing methods (SFT, RFT, WizardMath) have significantly lower accuracy on backward questions, as shown in Figure 6, motivating us to bootstrap backward questions to improve the reasoning ability.
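To make the collection step above concrete, the following is a minimal sketch of how $\mathcal{D}_{\text{rephrase}}$ could be assembled; the helpers `rephrase_question`, `answer_with_cot`, and `extract_final_answer` are hypothetical wrappers around the LLM calls and answer parsing, not part of the released MetaMath code.

```python
from typing import Callable, List, Tuple

def build_rephrase_set(
    train_set: List[Tuple[str, str]],            # (question q_i, ground-truth answer a_i*)
    rephrase_question: Callable[[str], str],     # hypothetical LLM wrapper: q_i -> rephrased q_i
    answer_with_cot: Callable[[str], str],       # hypothetical LLM wrapper: question -> CoT reasoning path
    extract_final_answer: Callable[[str], str],  # hypothetical parser: reasoning path -> final answer
    k_rephrase: int = 2,
) -> List[Tuple[str, str, str]]:
    """Collect rephrased questions whose generated answers match the ground truth (Eq. 2)."""
    d_rephrase = []
    for question, gold_answer in train_set:
        for _ in range(k_rephrase):
            new_q = rephrase_question(question)       # rewrite the question without extra knowledge
            reasoning = answer_with_cot(new_q)        # Chain-of-Thought reasoning path
            answer = extract_final_answer(reasoning)  # predicted final answer
            if answer == gold_answer:                 # keep only verified samples
                d_rephrase.append((new_q, reasoning, answer))
    return d_rephrase
```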
2309.12284#20
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
21
In addition, in Choice-75 we make many assumptions that are essentially oversimplified representations of real-world scenarios. For example, we assume each goal has two mutually exclusive choices, while in some cases there are many more choices (not two) and choices overlap with one another (not mutually exclusive). There are many ways to expand and enrich this dataset, and we leave this as future work. Last but not least, we also do not conduct any prompt engineering due to a limited computation budget. We only experiment with two very basic prompt formats, a fixed number of few-shot samples, and a fixed set of GPT generation parameters. It would also be interesting for future work to study the performance of different language models and different prompt settings on Choice-75. # Acknowledgements We thank Nathanael Chambers for inspiring this work and for valuable discussions. This work would not have been possible without the help of Hainiu Xu in verifying the data. We also thank Xiang Lorraine Li for her suggestions on revising this paper.
2309.11737#21
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
21
# Example 3.2: Question and Backward Question Question: James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay? Answer: He bought 5*4=20 pounds of beef. He paid 20*5.5=$110. The answer is: 110 ✓ Backward Question: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay? If we know the answer to the above question is 110, what is the value of unknown variable x? Answer: The total weight of the beef is 4*x because 4*5.5 = 22. ... The answer is: 27 ✗ To improve the backward reasoning ability of finetuned models, we generate more questions that can be solved in a backward manner: a number in the question $q_i$ is masked by “x”, and the LLM is asked to predict the value of “x” when the answer $a_i^\star$ is provided. Different from forward reasoning, which generates explicit intermediate steps towards the final answer, backward reasoning starts with the answer and generates multiple reasoning steps to predict the masked number. Representative backward reasoning methods include Self-Verification [68] and FOBAR [28].
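A minimal sketch of the number-masking step just described, under the assumption that masking the first number found is an acceptable heuristic (the text does not specify which number is masked); the Self-Verification and FOBAR variants sketched after Examples 3.3 and 3.4 below then differ in how the known answer $a_i^\star$ is attached.

```python
import re

def mask_first_number(question: str) -> str:
    """Mask the first number in a forward question with the unknown variable "x",
    turning it into the stem of a backward question."""
    masked, n_subs = re.subn(r"\d+(\.\d+)?", "x", question, count=1)
    if n_subs == 0:
        raise ValueError("No number found to mask in the question.")
    return masked
```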
2309.12284#21
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
22
This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), the Office of the Director of National Intelligence (ODNI) via the IARPA HIATUS Program (contract 2022-22072200005), the NSF (Award 1928631), and gifts from Roblox and Salesforce. Approved for Public Release, Distribution Unlimited. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ODNI, IARPA, NSF, the U.S. Government, or of Roblox or Salesforce. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. # References JinYeong Bak and Alice Oh. 2018. Conversational decision-making model for predicting the king’s decision in the annals of the Joseon dynasty. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 956–961, Brussels, Belgium. Association for Computational Linguistics.
2309.11737#22
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
22
In Self-Verification (SV) [68], the question with the answer is first rewritten into a declarative statement, e.g., “How much did he pay?” (with the answer 110) is rewritten into “He paid $110”. Then, a question asking for the value of x is appended, e.g., “What is the value of unknown variable x?”. Example 3.3 gives an augmented example. We collect the new questions and their generated reasoning paths with correct answers as the augmented data: $\mathcal{D}_{\text{SV}} = \{(\tilde{q}_i^{(j)}, \tilde{r}_i^{(j)}, \tilde{a}_i^{(j)}) : \tilde{a}_i^{(j)} = a_i^\star;\; i = 1, \ldots, N_q;\; j = 1, \ldots, K_{\text{SV}}\}.$ (3) # Example 3.3: Self-Verification [68] Question
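Below is a small illustrative sketch of assembling an SV-style question; `to_declarative` is a hypothetical helper (e.g., an LLM call) standing in for the declarative rewriting step, and the regex masking heuristic is an assumption rather than the authors' exact rule.

```python
import re
from typing import Callable

def make_sv_question(question: str, gold_answer: str,
                     to_declarative: Callable[[str, str], str]) -> str:
    """Build a Self-Verification-style backward question.

    Steps: (1) mask a number in the question with "x"; (2) rewrite the final
    interrogative sentence into a declarative statement carrying the known answer
    (delegated to `to_declarative`, a hypothetical LLM-backed helper, e.g.
    "How much did he pay?" -> "He paid $110."); (3) append the query for x.
    """
    masked = re.sub(r"\d+(\.\d+)?", "x", question, count=1)   # e.g. "James buys x packs ..."
    declarative = to_declarative(masked, gold_answer)         # e.g. "... He paid $110."
    return f"{declarative} What is the value of unknown variable x?"
```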
2309.12284#22
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
23
Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL-08: HLT, pages 789–797, Columbus, Ohio. Association for Computational Linguistics. Nathanael Chambers and Dan Jurafsky. 2010. A database of narrative schemas. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta. European Language Resources Association (ELRA). Bhavana Dalvi, Niket Tandon, Antoine Bosselut, Wen-tau Yih, and Peter Clark. 2019. Everything happens for a reason: Discovering the purpose of actions in procedural text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4496–4505, Hong Kong, China. Association for Computational Linguistics.
2309.11737#23
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
23
# Example 3.3: Self-Verification [68] Question Question: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. He paid 110. What is the value of unknown variable x? Answer: To solve this problem, we need to determine the value of x, which represents the number of packs of beef that James bought. Each pack of beef weighs 4 pounds and costs $5.50 per pound. The total amount James paid is $110. We can set up the equation as follows: Number of packs of beef * Weight per pack * Price per pound = Total amount paid; x * 4 * $5.50 = $110; ... The value of x is 5. Self-Verification needs to rewrite the question with the answer into a declarative statement, which is challenging for complex questions. To address this issue, FOBAR [28] proposes to directly append the answer to the question, i.e., “If we know the answer to the above question is {$a_i^\star$}, what is the value of unknown variable x?” Example 3.4 shows an example. We collect the new questions along with their correct answers as our augmented data:
2309.12284#23
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
24
Xinya Du, Zixuan Zhang, Sha Li, Pengfei Yu, Hongwei Wang, Tuan Lai, Xudong Lin, Ziqi Wang, Iris Liu, Ben Zhou, Haoyang Wen, Manling Li, Darryl Hannan, Jie Lei, Hyounghun Kim, Rotem Dror, Haoyu Wang, Michael Regan, Qi Zeng, Qing Lyu, Charles Yu, Carl Edwards, Xiaomeng Jin, Yizhu Jiao, Ghazaleh Kazeminejad, Zhenhailong Wang, Chris Callison-Burch, Mohit Bansal, Carl Vondrick, Jiawei Han, Dan Roth, Shih-Fu Chang, Martha Palmer, and Heng Ji. 2022. RESIN-11: Schema-guided event prediction for 11 newsworthy scenarios. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations, pages 54–63, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics.
2309.11737#24
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
24
$\mathcal{D}_{\text{FOBAR}} = \{(\bar{q}_i^{(j)}, \bar{r}_i^{(j)}, \bar{a}_i^{(j)}) : \bar{a}_i^{(j)} = a_i^\star;\; i = 1, \ldots, N_q;\; j = 1, \ldots, K_{\text{FOBAR}}\}.$ (4) # Example 3.4: FOBAR [28] Question Question: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay? If we know the answer to the above question is 110, what is the value of unknown variable x? Answer: James buys x packs of beef that are 4 pounds each, so he buys a total of 4x pounds of beef. The price of beef is $5.50 per pound, so the total cost of the beef is 5.50 * 4x = 22x. We are given that the total cost is $110, so we can write: 22x = 110. Dividing both sides by 22, we get: x = 5. The value of x is 5. 3.4 FINETUNING OBJECTIVE FUNCTIONS We merge all the augmented data, including answer-augmented data and bootstrapped questions (Rephrasing, Self-Verification, FOBAR) as:
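A corresponding sketch for the FOBAR construction, which only needs string templating because the known answer is appended directly (template taken from Example 3.4 above); the regex masking heuristic is again an illustrative assumption rather than the authors' exact rule.

```python
import re

def make_fobar_question(question: str, gold_answer: str) -> str:
    """Build a FOBAR-style backward question: mask a number in the question with "x",
    then directly append the known answer and ask for the masked variable."""
    masked = re.sub(r"\d+(\.\d+)?", "x", question, count=1)
    return (f"{masked} If we know the answer to the above question is {gold_answer}, "
            "what is the value of unknown variable x?")
```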
2309.12284#24
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
25
Raquel Fernández, Matthew Frampton, Patrick Ehlen, Matthew Purver, and Stanley Peters. 2008. Modelling and detecting decisions in multi-party dialogue. In Proceedings of the 9th SIGdial Workshop on Discourse and Dialogue, pages 156–163, Columbus, Ohio. Association for Computational Linguistics. Georgi Karadzhov, Tom Stafford, and Andreas Vlachos. 2022. What makes you change your mind? an empirical investigation in online group decision-making conversations. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 552–563, Edinburgh, UK. Association for Computational Linguistics. Katherine Keith and Amanda Stent. 2019. Modeling financial analysts’ decision making via the pragmatics and semantics of earnings calls. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 493–503, Florence, Italy. Association for Computational Linguistics.
2309.11737#25
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
25
We merge all the augmented data, including answer-augmented data and bootstrapped questions (Rephrasing, Self-Verification, FOBAR), as: $\mathcal{D}_{\text{MetaMathQA}} = \mathcal{D}_{\text{AnsAug}} \cup \mathcal{D}_{\text{rephrase}} \cup \mathcal{D}_{\text{SV}} \cup \mathcal{D}_{\text{FOBAR}}.$ (5) We finetune an LLM (parameterized by θ) on $\mathcal{D}_{\text{MetaMathQA}}$ to obtain the MetaMath model by maximizing the log-likelihood of the reasoning path conditioned on the question, i.e., $\mathcal{L}(\theta) = \sum_{(q, r, a) \in \mathcal{D}_{\text{MetaMathQA}}} \log P(r \mid q; \theta).$ (6) Although we only consider LLaMA-2 here, MetaMathQA can also be used to finetune other LLMs.
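The objective in Eq. (6) can be sketched as a standard causal-LM loss in which only the reasoning-path tokens are scored; the `model` and `tokenizer` interfaces below are assumptions for illustration, not the actual finetuning stack.

```python
import torch
import torch.nn.functional as F

def metamath_nll_loss(model, tokenizer, question: str, reasoning: str) -> torch.Tensor:
    """Negative log-likelihood of the reasoning path conditioned on the question (Eq. 6).

    `model` is assumed to be a causal LM returning next-token logits of shape
    [1, seq_len, vocab]; `tokenizer` is assumed to map text to a list of token ids.
    """
    q_ids = tokenizer(question)
    r_ids = tokenizer(reasoning)
    input_ids = torch.tensor([q_ids + r_ids])

    logits = model(input_ids)                      # [1, seq_len, vocab]
    shift_logits = logits[:, :-1, :]               # predict token t from positions < t
    shift_targets = input_ids[:, 1:].clone()
    shift_targets[:, : len(q_ids) - 1] = -100      # ignore question tokens in the loss

    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_targets.reshape(-1),
        ignore_index=-100,
    )
```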
2309.12284#25
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
26
of the 57th Annual Meeting of the Association for Computational Linguistics, pages 493–503, Florence, Italy. Association for Computational Linguistics. Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020. Connecting the dots: Event graph schema induction with path language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 684–695, Online. Association for Computational Linguistics. Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. 2022. WANLI: Worker and AI collaboration for natural language inference dataset creation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6826–6847, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Qing Lyu, Li Zhang, and Chris Callison-Burch. 2021. Goal-oriented script construction. In Proceedings of the 14th International Conference on Natural Language Generation, pages 184–200, Aberdeen, Scotland, UK. Association for Computational Linguistics.
2309.11737#26
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
26
Although we only consider LLaMA-2 here, MetaMathQA can also be used to finetune other LLMs. Table 1 (Effect of different question augmentations with LLaMA-2-7B finetuned on GSM8K or MATH): with rows ranging from SFT [62] (no augmentation) to MetaMath trained with different combinations of the four augmentations (AnsAug, Rephrasing, SV, FOBAR), GSM8K-finetuned models score 41.6, 59.6, 59.7, 60.6, and 64.4 on GSM8K and 3.0, 4.4, 4.4, 4.4, and 5.7 on MATH; MATH-finetuned models score 13.8, 28.4, 30.4, 29.1, and 34.6 on GSM8K and 4.7, 12.9, 12.4, 15.3, and 17.7 on MATH. 4 EXPERIMENTS AND RESULTS 4.1 EXPERIMENTAL SETUP Table 2 (Number of samples in the proposed MetaMathQA): MetaMathQA-GSM8K contains 80K AnsAug, 80K Rephrasing, 40K SV, and 40K FOBAR samples (240K in total); MetaMathQA-MATH contains 75K, 50K, 15K, and 15K (155K in total); the full MetaMathQA contains 155K, 130K, 55K, and 55K (395K in total).
2309.12284#26
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
27
Liang Qiu, Yizhou Zhao, Yuan Liang, Pan Lu, Weiyan Shi, Zhou Yu, and Song-Chun Zhu. 2022. Towards socially intelligent agents with mental state transition and human value. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 146–158, Edinburgh, UK. Association for Computational Linguistics. Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Niket Tandon, Peter Clark, and Yejin Choi. 2021. proScript: Partially ordered scripts generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2138–2149, Punta Cana, Dominican Republic. Association for Computational Linguistics. Roger C. Schank. 1977. Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. L. Erlbaum Associates, Hillsdale, N.J.
2309.11737#27
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
27
Datasets. We use two popular mathematical reasoning benchmarks: (i) GSM8K [12] is a dataset consisting of high-quality grade school math problems, containing 7,473 training samples and 1,319 testing samples; and (ii) the MATH [21] dataset consists of high school math competition problems that span seven subjects including Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus. It contains 7,500 and 5,000 samples for training and testing, respectively. Questions in GSM8K [12] take between 2 and 8 steps to reach the answer, while MATH is much more challenging. Models. We use the current state-of-the-art open-source model LLaMA-2 [62], including three different parameter sizes: 7B, 13B, and 70B, as the base model for fine-tuning. GPT-3.5-Turbo is used for rephrasing questions as well as generating answers in all four augmentations, where the temperature is set to 0.7 as in [66]. The LLaMA-2-7B and LLaMA-2-13B are trained by fully fine-
2309.12284#27
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
28
Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, and Eduard Hovy. 2020. A dataset for tracking entities in open domain procedural text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6408–6417, Online. Association for Computational Linguistics. Serra Sinem Tekiroğlu, Yi-Ling Chung, and Marco Guerini. 2020. Generating counter narratives against online hate speech: Data and strategies. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1177–1190, Online. Association for Computational Linguistics. Jingjing Wang, Changlong Sun, Shoushan Li, Jiancheng Wang, Luo Si, Min Zhang, Xiaozhong Liu, and Guodong Zhou. 2019. Human-like decision making: Document-level aspect sentiment classification via hierarchical reinforcement learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5581–5590, Hong Kong, China. Association for Computational Linguistics.
2309.11737#28
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.11737
29
Hui Yang, Tat-Seng Chua, Shuguang Wang, and Chun-Keat Koh. 2003. Structured use of external knowledge for event-based open domain question answering. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 33–40. Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, and Sebastian Gehrmann. 2022. SynthBio: A case study in human-AI collaborative curation of text datasets. Li Zhang, Qing Lyu, and Chris Callison-Burch. 2020. Reasoning about goals, steps, and temporal ordering with WikiHow. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4630–4639, Online. Association for Computational Linguistics. Li Zhang, Hainiu Xu, Yue Yang, Shuyan Zhou, Weiqiu You, Manni Arora, and Chris Callison-Burch. 2023a. Causal reasoning of entities and events in procedural texts. In Findings of the Association for Computational Linguistics: EACL 2023, pages 415–431, Dubrovnik, Croatia. Association for Computational Linguistics.
2309.11737#29
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
29
Baselines. The proposed methods are compared with (i) closed-source models such as GPT-3.5-Turbo [47] and PaLM [11]; (ii) open-source models such as LLaMA-1 [61] and LLaMA-2 [62]; (iii) Supervised Fine-Tuning (SFT), which uses the training set of the original GSM8K or MATH datasets; (iv) Rejection sampling Fine-Tuning (RFT) [69], which generates and collects correct reasoning paths as augmented data for fine-tuning; and (v) WizardMath [38], which generates samples and trains two reward models using ChatGPT¹ to select samples for fine-tuning.
2309.12284#29
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
30
Tianyi Zhang, Isaac Tham, Zhaoyi Hou, Jiaxuan Ren, Leon Zhou, Hainiu Xu, Li Zhang, Lara Martin, Rotem Dror, Sha Li, Heng Ji, Martha Palmer, Susan Windisch Brown, Reece Suchocki, and Chris Callison-Burch. 2023b. Human-in-the-loop schema induction. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pages 1–10, Toronto, Canada. Association for Computational Linguistics. Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2021. Temporal reasoning on implicit events from distant supervision. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1361–1371, Online. Association for Computational Linguistics.
2309.11737#30
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
30
Diversity Gain. We use the diversity gain [5] to measure to what extent a new dataset added to a base dataset can improve the overall data diversity. For a base dataset $\mathcal{D}_{\text{base}} = \{x_i = (q_i, r_i, a_i)\}_{i=1}^{N}$ with N samples and a new dataset $\mathcal{D}_{\text{new}} = \{x_i = (q_i, r_i, a_i)\}_{i=1}^{M}$ with M samples, the diversity gain of $\mathcal{D}_{\text{new}}$ relative to $\mathcal{D}_{\text{base}}$ is defined as $d_{\text{gain}} = \frac{1}{M} \sum_{x_i \in \mathcal{D}_{\text{new}}} \min_{x_j \in \mathcal{D}_{\text{base}}} \left( \| f(x_i) - f(x_j) \|_2^2 \right)$, where f is the feature extractor; we use the OpenAI Embedding API text-embedding-ada-002 for feature extraction. For Figure 2, we change the data size of the base data and select a fixed set of 20K new data points that the model has not encountered to form $\mathcal{D}_{\text{new}}$. 4.2 RESULTS ON GSM8K AND MATH
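Given feature vectors from the embedding model, the diversity gain above can be computed as in the following sketch (illustrative, not the authors' implementation); the embeddings themselves are assumed to come from whatever feature extractor f is used, e.g. the text-embedding-ada-002 endpoint.

```python
import numpy as np

def diversity_gain(base_feats: np.ndarray, new_feats: np.ndarray) -> float:
    """Average over new samples of the squared L2 distance to their nearest base sample.

    base_feats: [N, d] feature matrix of the base dataset (f(x_j) for x_j in D_base)
    new_feats:  [M, d] feature matrix of the new dataset  (f(x_i) for x_i in D_new)
    """
    # Pairwise squared distances between every new sample and every base sample: [M, N]
    diffs = new_feats[:, None, :] - base_feats[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)
    # Min over base samples, then mean over new samples (the 1/M factor).
    return float(np.mean(np.min(sq_dists, axis=1)))
```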
2309.12284#30
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
31
Shuyan Zhou, Li Zhang, Yue Yang, Qing Lyu, Pengcheng Yin, Chris Callison-Burch, and Graham Neubig. 2022. Show me more details: Discovering hierarchies of procedures from semi-structured web data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2998–3012, Dublin, Ireland. Association for Computational Linguistics. # A Human-in-the-loop Data Generation Prompting Details
2309.11737#31
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
31
4.2 RESULTS ON GSM8K AND MATH Table 2 gives a detailed description of our MetaMathQA collection and Table 3 shows the testing accuracy on GSM8K and MATH. As can be seen, for open-source models with 1-10B parameters, MetaMath achieves state-of-the-art performance. Compared to the previous best LLM, MetaMath achieves a large improvement of 11.6% on GSM8K and 9.1% on MATH in testing accuracy, showing that finetuning on our MetaMathQA data is effective. (¹ https://openai.com/) As for LLMs with 11-50B parameters, the proposed MetaMath performs the best. Particularly, on both GSM8K and MATH, MetaMath achieves higher accuracy than SFT, RFT, and WizardMath by a large margin (+7%), demonstrating the effectiveness of the MetaMath data in improving mathematical reasoning ability. Furthermore, for LLMs with 51-70B parameters, MetaMath again achieves the highest testing accuracy. Particularly, MetaMath is better than GPT-3.5-Turbo on GSM8K, which is used for generating the augmented data for finetuning. 4.3 EFFECT OF AUGMENTATIONS
2309.12284#31
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
32
# A Human-in-the-loop Data Generation Prompting Details There are three implementation details about the prompting setup for Human-in-the-loop data generation. First, in all prompts, we include "overall goal", which is the goal for the script from proScript, while "step goal" is the goal the person needs to make a decision on as well as the goal we refer to in the paper. We include the "overall goal" just to provide additional context information. Second, for all prompts, the results would be the scenarios with the correct answer being option 1. We also swap the two options in these prompts so that we can get hard scenarios with the correct answer being option 2. Third, for all prompts, we provide four hand-written demonstrations, all of which come from the 10 held-out training scripts described in Section 3. We use the insertion mode of the provided OpenAI API, text-davinci-003 as the model, and 0.75 as the temperature. Verb Phrase
2309.11737#32
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
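The scenario-generation setup described in the chunk above (insertion mode of the OpenAI completions endpoint, text-davinci-003, temperature 0.75) can be illustrated with a minimal sketch. This assumes the legacy (pre-1.0) `openai` Python client, whose `Completion.create` call accepts a `suffix` argument for insertion mode; the helper name, the simplified template, and the omission of the four hand-written demonstrations are illustrative assumptions, not the authors' released code.

```python
import openai  # legacy (<1.0) client interface assumed; openai.api_key must be set

def generate_scenario(overall_goal: str, step_goal: str, option1: str, option2: str) -> str:
    """Fill a Prompt Step 1 style template and let the model complete the
    [INSERT] slot via prompt + suffix (insertion mode). The four hand-written
    demonstrations from the held-out training scripts are omitted here."""
    template = (
        f"Doe wants to {overall_goal}. One of the steps towards that is {step_goal}. "
        f"Doe has two options: 1) {option1} or 2) {option2} "
        f"Because Doe [INSERT], Doe chooses option 1."
    )
    prompt, suffix = template.split("[INSERT]")
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        suffix=suffix,        # text is generated between prompt and suffix
        temperature=0.75,
        max_tokens=32,
    )
    return resp["choices"][0]["text"].strip()
```

Swapping option1 and option2 in the template, as the chunk describes, would yield hard scenarios whose correct answer is option 2.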
2309.12284
32
4.3 EFFECT OF AUGMENTATIONS In this section, we conduct experiments to study the effect of augmentations in MetaMath. We first finetune the LLaMA-2-7B model on augmented GSM8K (MetaMathQA-GSM8K) data, and test the finetuned model on GSM8K and MATH. Table 1 shows the testing accuracy of different combinations of augmentations. As can be seen, on GSM8K, the models trained on answer augmentation (AnsAug) or rephrasing augmentation achieve much higher accuracy than SFT, which is only trained on the training set. Combining answer augmentation and rephrasing augmentation data for fine-tuning leads to a slightly higher accuracy, which is further improved by about 4% through merging the FOBAR and SV augmentation data. As for MATH, MetaMath trained only on MetaMathQA-GSM8K data performs better than SFT, suggesting its effectiveness in generalizing to unseen mathematical tasks.
2309.12284#32
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
33
Prompt Step 1: Doe wants to go {overall goal}. One of the steps towards that is {step goal}. Doe has two options: 1) {option 1} or 2) {option 2} Because Doe [INSERT], Doe chooses option 1. Prompt Step 2: Doe wants to {overall goal}. One of the steps towards that is to {step goal}. Doe has two options: 1) {option 1} or 2) {option 2} Because [INSERT], Doe {scenario-base}. Therefore, option 2 is not available or not desirable for Doe and Doe chooses option 1. # User Profile Prompt: A person Doe would like to {overall goal} and need to finish the step of {step goal}. Doe now has two options: option 1 is to {option 1} and option 2 is to {option 2}. Eventually, Doe picked option 1 over the other. Make a comprehensive user profile for Doe without explicitly mentioning the choice Doe made. Must-includes: preferences, interests, financial situation, etc. Optional: occupations, hobbies, gender, lifestyle Avoid: long sentences User Profile: # B Decision Prediction Prompting Details During inference time, we provide 9 in-context demonstrations, which are the combination of 3 difficulty levels and 3 labels. We also set the temperature to 0 to ensure consistency across runs. # Naive Prompt
2309.11737#33
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
33
We also conduct an experiment by fine-tuning LLaMA-2-7B on the MetaMathQA-MATH data then evaluate the model on GSM8K and MATH. Table 1 shows the testing accuracy. Again, MetaMath trained on AnsAug or rephrasing augmentation data performs much better than SFT. Furthermore, merging all augmented data together for fine-tuning is better than merging AnsAug and rephrasing augmentation data, demonstrating the effectiveness of SV and FOBAR augmentation data in improving mathematical reasoning ability. Moreover, for the unseen GSM8K task, MetaMath trained on MetaMathQA-MATH data is significantly better than SFT (+20%). (Table 3 column headers: Model, #params, GSM8K, MATH)
2309.12284#33
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
34
# Naive Prompt [Goal]: {step goal} [Option 1]: {option 1} [Option 2]: {option 2} [Scenario]: {scenario} [Question]: Given the Scenario, which option above is the better choice in order to achieve the Goal? 1) Option 1 2) Option 2 3) Either one, since they have similar effect when it comes to the goal [Answer]: # Story Prompt A person Doe needs to {step goal}. Now there are two options for Doe: we can either {option 1} (Option 1) or {option 2} (Option 2). Suppose Doe {scenario}. [Question]: Given the Scenario, which option above is the better choice in order to achieve the Goal? 1) Option 1 2) Option 2 3) Either one, since they have similar effect when it comes to the goal [Answer]: # C Difficulty Levels The difficulty levels are annotated based on how many steps of reasoning are required to get to the correct option choice. To illustrate such differences among these levels, consider the following goal and corresponding scenarios (Table 4): # Table 4 (Level / Example): Easy: Scenario: have no internet connection; Choice: Option 1. Medium: Scenario: have special requests about the book; Choice: Option 1. Hard: Scenario: is 3 am in the morning; Choice: Option 2. Medium: Scenario:
2309.11737#34
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
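To make the inference setup above concrete, here is a minimal sketch that fills the Naive Prompt template for one scenario and prepends in-context demonstrations (the paper uses 9, one per difficulty level and label, with temperature 0 at inference). The function name, the demonstration formatting, and the joining with blank lines are assumptions for illustration, not the authors' code.

```python
NAIVE_TEMPLATE = """[Goal]: {step_goal}
[Option 1]: {option_1}
[Option 2]: {option_2}
[Scenario]: {scenario}
[Question]: Given the Scenario, which option above is the better choice in order to achieve the Goal?
1) Option 1
2) Option 2
3) Either one, since they have similar effect when it comes to the goal
[Answer]:"""

def build_naive_prompt(step_goal, option_1, option_2, scenario, demos=()):
    """Prepend in-context demonstrations (assumed to be pre-formatted strings)
    and append the query filled into the Naive Prompt template."""
    filled = NAIVE_TEMPLATE.format(step_goal=step_goal, option_1=option_1,
                                   option_2=option_2, scenario=scenario)
    return "\n\n".join(list(demos) + [filled])

# Example query built from the library-hours script:
print(build_naive_prompt(
    step_goal="find out the library's hours",
    option_1="call the library",
    option_2="search online for the library's hours",
    scenario="have no internet connection",
))
```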
2309.12284
34
closed-source models (Table 3, GSM8K / MATH testing accuracy): GPT-4 [48]: 92.0 / 42.5; GPT-3.5-Turbo [47]: 80.8 / 34.1; PaLM [11] 8B: 4.1 / 1.5; PaLM [11] 62B: 33.0 / 4.4; PaLM [11] 540B: 56.5 / 8.8; PaLM-2 [2] 540B: 80.7 / 34.3; Flan-PaLM 2 [2] 540B: 84.7 / 33.2; Minerva [31] 8B: 16.2 / 14.1; Minerva [31] 62B: 52.4 / 27.6; Minerva [31] 540B: 58.8 / 33.6. open-source models (1-10B), GSM8K: LLaMA-1 [61] 7B: 11.0; LLaMA-2 [62] 7B: 14.6; MPT [44] 7B: 6.8; Falcon [51] 7B: 6.8; InternLM [27] 7B: 31.2; GPT-J [63] 6B: 34.9; ChatGLM 2 [71] 6B: 32.4; Qwen [1] 7B: 51.6; Baichuan-2 [3] 7B: 24.5; SFT [62] 7B: 41.6; RFT [69] 7B: 50.3; WizardMath [38] 7B: 54.9; MetaMath 7B: 66.5
2309.12284#34
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
35
Medium: Scenario: have special requests about the book; Choice: Option 1. Hard: Scenario: is 3 am in the morning; Choice: Option 2. Medium: Scenario: User Profile: - Name: Doe; Interests: American history; Financial situation: moderate; Occupation: student; Special circumstances: has a very bad sore throat; Hobbies: reading books; Gender: male; Lifestyle: balanced; Choice: Option 2. Table 4: Different levels of reasoning in the library hours example. Goal: find out the library’s hours. Option 1: call the library. Option 2: search online for the library’s hours. • Easy: Scenarios explicitly refer to one of the options or something closely related to one of the options. Only one easy reasoning step is required for such decision-making. For example, “internet connection” (Scenario) is directly related to “search online” (Option 2) and makes it infeasible.
2309.11737#35
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
35
(Table 3 continued) open-source models (1-10B), MATH: LLaMA-1 2.9; LLaMA-2 2.5; MPT 3.0; Falcon 2.3; InternLM -; GPT-J -; ChatGLM 2 [71] -; Qwen [1] -; Baichuan-2 [3] 5.6; SFT [62] -; RFT [69] -; WizardMath [38] 10.7; MetaMath 19.8. open-source models (11-50B), GSM8K / MATH: LLaMA-1 [61] 13B: 17.8 / 3.9; LLaMA-1 [61] 33B: 35.6 / 7.1; LLaMA-2 [62] 13B: 28.7 / 3.9; LLaMA-2 [62] 34B: 42.2 / 6.2; MPT [44] 30B: 15.2 / 3.1; Falcon [51] 40B: 19.6 / 2.5; GAL [60] 30B: - / 12.7; Vicuna [10] 13B: 27.6 / -; Baichuan-2 [3] 13B: 52.8 / 10.1; SFT [62] 13B: 50.0 / -; RFT [69] 13B: 54.8 / -; WizardMath [38] 13B: 63.9 / 14.0; MetaMath 13B: 72.3 / 22.4. open-source models (51-70B), GSM8K: LLaMA-1 [61] 65B: 50.9; LLaMA-2 70B: 56.8; 64.8 (70B); 81.6 (70B); 82.3 (70B)
2309.12284#35
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.11737
36
• Medium: Scenarios implicitly refer to one of the options or something related to one of the options. These choices typically require either some degree of commonsense knowledge or multiple steps of reasoning. For example, “special requests” (Scenario) implies that the person needs to talk to a staff member, which is related to “call the library” (Option 1); for the same reason, “has a very bad sore throat” (Scenario) implies that the person cannot talk, which is related to “call the library” (Option 1). • Hard: Scenarios implicitly refer to something related to one of the options. These choices typically require the combination of commonsense knowledge and multiple steps of reasoning. For example, one would need commonsense to know that “3 am in the morning” (Scenario) implies that the library is very likely to be closed; then one needs to further reason that in a closed library, no one would pick up the phone. This makes “call the library” (Option 1) infeasible. # D Qualitative Error Analysis Here we provide two qualitative analyses where the prediction is different from the ground truth answer: Example 1
2309.11737#36
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.11737
37
# D Qualitative Error Analysis Here we provide two qualitative analyses where the prediction is different from the ground truth answer: Example 1 Goal: purchase a plane ticket - Option 1: purchase a plane ticket to a major city but far from the desert - Option 2: purchase a plane ticket to a small city but right next to the desert - Scenario: hate connecting flights - Level: hard - True Answer: option 1 - Predicted Answer: option 2 Analysis: for the example above, a flight to a major city & far from the desert would most likely require a connecting flight as the next step; a flight to a small city near the desert would be ideal since it does not require a connecting flight. The model is not able to conduct these reasoning steps given the output. # Example 2 - Goal: pack hiking backpacks - Option 1: bring processed food for every meal - Option 2: bring raw foods and some cookware to cook at the campsite - Scenario: want to enjoy every minute of the holiday - Level: medium - True Answer: option 1 - Predicted Answer: option 2 Analysis: if the person brings raw foods and cooks them at the campsite, most likely they would have to spend more time on the cooking instead of enjoying the hike. Therefore option 1 is preferable.
2309.11737#37
Choice-75: A Dataset on Decision Branching in Script Learning
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios.
http://arxiv.org/pdf/2309.11737
Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch
cs.AI
null
null
cs.AI
20230921
20230921
[]
2309.12284
37
Table 3: Comparison of testing accuracy to existing LLMs on GSM8K and MATH. ‡Due to the computing resource limitation, we finetune MetaMath-70B using QLoRA [14]. 4.4 DISCUSSION FROM A PERPLEXITY PERSPECTIVE According to the Superficial Alignment Hypothesis proposed by Zhou et al. [73], the capability of a model is rooted in pretraining, and data from downstream tasks acts to activate the inherent Figure 3: Lower perplexity of MetaMathQA. Figure 4: Accuracy correlates positively with diversity.
2309.12284#37
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
38
Figure 3: Lower perplexity of MetaMathQA. Figure 4: Accuracy correlates positively with diversity. ability of LLMs that has been learned during pretraining. There are two important questions that arise from such a hypothesis: (i) what kind of data is most effective at activating possible latent knowledge, and (ii) why is one dataset better than another at such activation? Our empirical results suggest that, in the mathematical tasks we consider, our MetaMathQA dataset may serve as a superior activator of mathematical knowledge. Yet, why MetaMath yields superior performance compared to training on the data of correct answer-only or GSM8K CoT is unclear. We speculate that perhaps it is the simplicity of the data that matters. As shown in Figure 3, we compute the perplexity [41, 64] for the under-finetuned LLaMA-2-7B model, in terms of answer-only data, GSM8K CoT, and the subsections of MetaMathQA data. The perplexity of MetaMathQA is significantly lower than the other two datasets. This highlights its inherently easy-to-learn nature, which may be more conducive to eliciting bolstered problem-solving abilities from an LLM. This is also aligned with the findings of TinyStories [16], where short and easy story data can help LLMs generate content fluently.
2309.12284#38
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
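The perplexity comparison discussed above can be reproduced in spirit with a standard Hugging Face pattern: score each training sample under the not-yet-finetuned LLaMA-2-7B and report exp(mean token-level cross-entropy). The checkpoint id, half-precision loading, and per-sample averaging below are assumptions for illustration; the paper does not spell out its exact perplexity protocol.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"   # assumed checkpoint id
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity = exp(mean token-level negative log-likelihood) of `text`."""
    ids = tok(text, return_tensors="pt").input_ids.to(model.device)
    loss = model(ids, labels=ids).loss   # mean cross-entropy over shifted tokens
    return math.exp(loss.item())

# Each dataset (answer-only, GSM8K CoT, MetaMathQA subsections) would be a list
# of question-plus-answer strings; here a single placeholder sample is shown.
samples = ["Question: ... Answer: ..."]
avg_ppl = sum(perplexity(s) for s in samples) / len(samples)
print(avg_ppl)
```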
2309.12284
39
4.5 DISCUSSION FROM A DIVERSITY PERSPECTIVE As shown in Figure 2, naively prompting GPT-3.5-Turbo for answer augmentation leads to a clear accuracy saturation. After accuracy saturation, increasing the AnsAug data only yields a limited performance gain. For instance, using 80K answer augmentation data to train a LLaMA-2 7B model leads to a 59.6% accuracy; adding a further 20K AnsAug samples yields only a 0.1% performance gain. This is due to the homogeneity of the additional samples, contributing to a diversity gain of only 0.05 (shown in Figure 4). In comparison, adding the same amount of data generated by question bootstrapping leads to a significant performance boost, which is due to the noticeable diversity gain brought by question bootstrapping. As shown in Figure 4, adding 20K data from Rephrasing, FOBAR, or SV brings an increasing diversity gain, causing a 0.4%, 2.3%, and 2.6% accuracy gain, respectively. This experiment demonstrates a positive correlation (the Pearson coefficient is 0.972) between the diversity brought by the bootstrapping methods and accuracy. This is also aligned with the success of MetaMath, which is trained with the diverse MetaMathQA dataset including 4 kinds of data reflecting both the forward and backward reasoning paths.
2309.12284#39
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
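The reported correlation between diversity gain and accuracy gain can be checked with a few lines of SciPy. The accuracy gains below are the ones quoted in the text (0.1, 0.4, 2.3, 2.6); the diversity-gain values are placeholders standing in for numbers read off Figure 4, so the resulting coefficient only illustrates the computation rather than reproducing the paper's 0.972.

```python
import numpy as np
from scipy.stats import pearsonr

# Accuracy gains (%) when adding 20K samples of each augmentation type.
accuracy_gain = np.array([0.1, 0.4, 2.3, 2.6])     # AnsAug, Rephrasing, FOBAR, SV
# Hypothetical diversity gains for illustration only (not the paper's exact values).
diversity_gain = np.array([0.05, 0.4, 0.8, 1.0])

r, p = pearsonr(diversity_gain, accuracy_gain)
print(f"Pearson r = {r:.3f} (paper reports 0.972), p = {p:.3f}")
```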
2309.12284
40
4.6 EVALUATING THE REVERSAL MATHEMATICAL CAPABILITY The Reversal Curse [4], where LLMs trained from a sentence “A is B” are not able to generalize to answer “B is A”, also aligns with the observation in this paper that LLMs lack backward mathematical reasoning ability. To evaluate the backward mathematical capability, we propose a GSM8K-Backward test set, including 1270 backward questions by using SV and FOBAR to augment the original GSM8K test set (as shown in Example 3.3 and Example 3.4). Figure 6 shows the accuracy comparison of different 7B mathematical LLMs between the GSM8K and GSM8K-Backward datasets. As can Figure 5: Combining RFT [69] dataset with our MetaMathQA leads to a performance drop. Figure 6: The accuracy gap between GSM8K and GSM8K-Backward. Figure 7: Testing accuracy on questions with short length, medium length and long length.
2309.12284#40
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
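A rough sketch of how a FOBAR-style backward question could be produced from a GSM8K item: mask one number in the question and append a query conditioned on the known answer. The regex, the choice to mask the first number, and the exact wording of the appended sentence are simplifying assumptions relative to the paper's augmentation pipeline.

```python
import re

def fobar_backward(question: str, answer: str):
    """Mask the first number in a GSM8K question with 'x' and append a
    FOBAR-style backward query; a simplified sketch, not the paper's pipeline."""
    numbers = re.findall(r"\d+(?:\.\d+)?", question)
    if not numbers:
        return None
    masked = question.replace(numbers[0], "x", 1)
    return (masked + f" If we know the answer to the above question is {answer}, "
            "what is the value of unknown variable x?")

q = ("Tonya is in a hamburger eating contest. Each hamburger is 4 ounces. "
     "Last year the winner ate 84 ounces. How many hamburgers does she have "
     "to eat to beat last year's winner?")
print(fobar_backward(q, "22"))
```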
2309.12284
41
Figure 6: The accuracy gap between GSM8K and GSM8K-Backward. Figure 7: Testing accuracy on questions with short length, medium length and long length. be seen, existing LLMs struggle to solve mathematical problems in backward rationales and our MetaMath has a significant improvement on both datasets. Specifically, the ways in which different LLMs solve the backward mathematical problem are illustrated through examples in Appendix A.3. 4.7 REASONING PATHS WITH INCORRECT ANSWER CAN ALSO BE USEFUL We conduct experiments on GSM8K using LLaMA-2-7B to study whether the answer augmentation samples with incorrect answers are helpful for finetuning the LLM. We randomly choose 7,473 reasoning paths with incorrect answers from the generated answers, and we ensure that the size is the same as that of the original training set. From Table 4, we observe that the model finetuned on the augmented data with incorrect answers is actually better than SFT, which is counter-intuitive. We hypothesize that although the final answer is incorrect, some intermediate reasoning steps are correct (see Example 4.1). These reasoning steps can still be useful supervision signals. Our results are also aligned with [34], where they discover the importance of intermediate process supervision for reasoning. # Example 4.1: A Reasoning Path with Incorrect Answer
2309.12284#41
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
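A minimal sketch of the selection step described above: keep only generated reasoning paths whose final answer disagrees with the gold answer and subsample to the size of the original training set (7,473). The final-answer regex and the record layout are assumptions about the data format, not the authors' released code.

```python
import random
import re

def extract_final_answer(text: str):
    """Assume generated answers end with 'The answer is: <number>'."""
    m = re.search(r"The answer is:\s*(-?\d+(?:\.\d+)?)", text)
    return m.group(1) if m else None

def sample_incorrect_paths(generations, train_size=7473, seed=0):
    """generations: list of dicts with 'question', 'generated', 'gold'.
    Keep paths whose extracted final answer exists and differs from the gold
    answer, then subsample to match the original training-set size."""
    wrong = [g for g in generations
             if extract_final_answer(g["generated"]) not in (None, g["gold"])]
    random.Random(seed).shuffle(wrong)
    return wrong[:train_size]
```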
2309.12284
42
# Example 4.1: A Reasoning Path with Incorrect Answer Question: Tonya is in a hamburger eating contest. Each hamburger is 4 ounces. Last year the winner ate 84 ounces. How many hamburgers does she have to eat to beat last year’s winner? (Ground-truth answer is 22) Generated Answer: If each hamburger is 4 ounces and last year’s winner ate 84 ounces, then Tonya needs to eat 84/4 = 21 hamburgers to beat last year’s winner. The answer is: 21 4.8 MORE DATA IS NOT ALWAYS BETTER There are also previous works that augment mathematical reasoning data for fine-tuning [38, 69]. An interesting question is whether combining existing augmented datasets with our MetaMathQA can improve the overall mathematical problem-solving performance. We select the RFT [69] dataset as the external dataset. Figure 5 shows that merging the RFT data into MetaMathQA actually hurts the performance, indicating that the RFT data may not be beneficial to MetaMath. Such a phenomenon is consistently observed in the MetaMathQA dataset under different sizes (from 20K to 100K), and the added RFT dataset is about 47K. The performance drop implies that more augmented data does not always help the generalization. # Example 4.2: Case Study in Test Set of GSM8K
2309.12284#42
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
43
Question: Darrell and Allen’s ages are in the ratio of 7:11, If their total age now is 162, calculate Allen’s age 10 years from now. (Ground-truth answer is 109) SFT Answer: The total ratio representing their ages is 7+11=<<7+11=18>>18. The fraction representing Darrell’s age is 7/18, and since their total age now is 162, Darrell’s age is 7/18*162 = <<7/18*162=60>>60. The fraction representing Allen’s age is 11/18, and since their total age now is 162, Allen’s age is 11/18*162 = <<11/18*162=84>>84. Ten years from now, Allen’s age will be 84 + 10 = <<84+10=94>>94. RFT Answer: The total ratio representing Darrell and Allen’s ages is 7+11 = <<7+11=18>>18. The fraction representing Darrell’s age from the total ratio is 7/18, and since their total age is 162, Darrell is 7/18*162 = 100 years old. Allen’s age, from the total ratio, is 11/18, and since Darrell is 100, Allen is
2309.12284#43
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
44
Darrell is 7/18*162 = 100 years old. Allen’s age, from the total ratio, is 11/18, and since Darrell is 100, Allen is 11/18*109 = <<11/18*100=66>>66 years old. Ten years from now, Allen will be 66 + 10 = 76 years old. WizardMath Answer: We know Darrell and Allen’s ages are in the ratio of 7:11, which means the total ratio representing their ages is 7+11 = 18. If their total age now is 162, we can calculate Darrell’s age by using the ratio. Specifically, Darrell’s age is 7/18 of 162, which is 7/18*162 = 49 years old. ... The answer is: 76. Our Answer: The ratio of Darrell’s age to Allen’s age is 7:11. Let’s assume Darrell’s age is 7x and Allen’s age is 11x. The total age of Darrell and Allen is 7x + 11x = 18x. We are given that the total age is 162, so 18x = 162. Dividing both sides by 18, we get x = 9. Therefore, Allen’s age is 11x = 11 * 9 = 99. 10 years from now,
2309.12284#44
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
46
4.9 ERROR ANALYSIS We have demonstrated that – across multiple scales – our MetaMath models can achieve stellar problem-solving performance. Yet, it is important to consider the characteristics of problems that induce errors in MetaMath and existing open-source mathematical models. In particular, we consider the relationship between question length and model performance. To investigate, we divide the GSM8K test set into three equally-sized subsets based on the different lengths of questions and calculate the accuracy of the models over each subset. We find in Figure 7 that MetaMath and related methods struggle under longer questions. However, excitingly, MetaMath always obtains superior performance. We see the study of improving model performance with longer question lengths – for instance, by further augmenting the MetaMathQA dataset – as ripe grounds for future work. # 5 CONCLUDING REMARKS
2309.12284#46
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
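The length-based error analysis can be sketched as follows: sort test questions by length, split them into three equal-sized buckets, and compute accuracy per bucket. Measuring length in characters and using exact-match comparison are assumptions; the paper only states that the GSM8K test set is divided into three equally-sized subsets by question length.

```python
def accuracy_by_length(examples, predict):
    """examples: list of (question, gold_answer); predict: fn(question) -> answer.
    Sort by question length, split into three equal-sized buckets, report accuracy."""
    ranked = sorted(examples, key=lambda ex: len(ex[0]))   # character length as a proxy
    n = len(ranked)
    buckets = [ranked[:n // 3], ranked[n // 3: 2 * n // 3], ranked[2 * n // 3:]]
    report = {}
    for name, bucket in zip(["short", "medium", "long"], buckets):
        correct = sum(predict(q) == gold for q, gold in bucket)
        report[name] = correct / max(len(bucket), 1)
    return report
```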
2309.12284
47
# 5 CONCLUDING REMARKS In this paper, we focus on improving the mathematical problem-solving abilities of open-source LLMs. By bootstrapping the mathematical questions in GSM8K and MATH, we present MetaMathQA, a high-quality and diverse dataset containing both forward-reasoning and backward-reasoning samples. Our family of LLMs fine-tuned on MetaMathQA, called MetaMath, achieves state-of-the-art results on mathematical benchmarks among all open-source LLMs. Remarkably, MetaMath-7B reaches 66.5% on GSM8K and 19.8% on MATH, surpassing previous open-source LLMs by a significant margin. Our work further underscores the importance of training-data characteristics in boosting LLM problem-solving capabilities. # ACKNOWLEDGEMENT The authors would like to sincerely thank Katherine M. Collins from the University of Cambridge for her valuable insights and suggestions. # REFERENCES [1] Alibaba. Qwen-7B. Technical Report, 2023.
2309.12284#47
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
48
[2] R. Anil, A. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, E. Chu, J. Clark, L. Shafey, Y. Huang, K. Meier-Hellstern, G. Mishra, E. Moreira, M. Omernick, K. Robinson, S. Ruder, Y. Tay, K. Xiao, Y. Xu, Y. Zhang, G. Abrego, J. Ahn, J. Austin, P. Barham, J. Botha, J. Bradbury, S. Brahma, K. Brooks, M. Catasta, Y. Cheng, C. Cherry, C. Choquette-Choo, A. Chowdhery, C. Crepy, S. Dave, M. Dehghani, S. Dev, J. Devlin, M. Díaz, N. Du, E. Dyer, V. Feinberg, F. Feng, V. Fienber, M. Freitag, X. Garcia, S. Gehrmann, L. Gonzalez, G. Gur-Ari, S. Hand,
2309.12284#48
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
49
Fienber, M. Freitag, X. Garcia, S. Gehrmann, L. Gonzalez, G. Gur-Ari, S. Hand, H. Hashemi, L. Hou, J. Howland, A. Hu, J. Hui, J. Hurwitz, M. Isard, A. Ittycheriah, M. Jagielski, W. Jia, K. Kenealy, M. Krikun, S. Kudugunta, C. Lan, K. Lee, B. Lee, E. Li, M. Li, W. Li, Y. Li, J. Li, H. Lim, H. Lin, Z. Liu, F. Liu, M. Maggioni, A. Mahendru, J. Maynez, V. Misra, M. Moussalem, Z. Nado, J. Nham, E. Ni, A. Nystrom, A. Parrish, M. Pellat, M. Polacek, A. Polozov, R. Pope, S. Qiao, E. Reif, B. Richter, P. Riley, A. Ros, A. Roy, B. Saeta, R. Samuel, R. Shelby, A. Slone, D. Smilkov, D.
2309.12284#49
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
52
[5] J. Bilmes. Submodularity In Machine Learning and Artificial Intelligence. Preprint arXiv:2202.00132, 2022. [6] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language Models are Few-Shot Learners. In Neural Information Processing Systems, 2020.
2309.12284#52
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
54
D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba. Evaluating Large Language Models Trained on Code. Preprint arXiv:2107.03374, 2021. [8] W. Chen, X. Ma, X. Wang, and W. Cohen. Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. Preprint arXiv:2211.12588, 2022.
2309.12284#54
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
55
[9] Y. Chen, R. Zhong, S. Zha, G. Karypis, and H. He. Meta-learning via Language Model In-context Tuning. In Annual Meeting of the Association for Computational Linguistics, 2022. [10] W. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. Gonzalez, I. Stoica, and E. Xing. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality. Technical Report, 2023.
2309.12284#55
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
56
[11] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. Dai, T. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov,
2309.12284#56
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
58
[12] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, C. Hesse, and J. Schulman. Training Verifiers to Solve Math Word Problems. Preprint arXiv:2110.14168, 2021. [13] K. Collins, A. Jiang, S. Frieder, L. Wong, M. Zilka, U. Bhatt, T. Lukasiewicz, Y. Wu, J. Tenen- baum, W. Hart, T. Gowers, W. Li, A. Weller, and M. Jamnik. Evaluating Language Models for Mathematics through Interactions. Preprint arXiv:2306.01694, 2023. [14] T. Dettmers, A. Pagnoni, A. Holtzman, and L. Zettlemoyer. Qlora: Efficient finetuning of quantized llms. Preprint arXiv:2305.14314, 2023.
2309.12284#58
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
59
[15] J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In North American Chapter of the Association for Computational Linguistics, 2019. [16] R. Eldan and Y. Li. TinyStories: How Small Can Language Models Be and Still Speak Coherent English? Preprint arXiv:2305.07759, 2023. [17] Y. Fu, H. Peng, L. Ou, A. Sabharwal, and T. Khot. Specializing Smaller Language Models towards Multi-Step Reasoning. In International Conference on Machine Learning, 2023. [18] Y. Fu, H. Peng, A. Sabharwal, P. Clark, and T. Khot. Complexity-Based Prompting for Multi- step Reasoning. In International Conference on Learning Representations, 2023. [19] J. Gou, B. Yu, S. Maybank, and D. Tao. Knowledge Distillation: A Survey. International Journal of Computer Vision, 2021. [20] T. He, C. Shen, Z. Tian, D. Gong, C. Sun, and Y. Yan. Knowledge Adaptation for Efficient Semantic Segmentation. In Computer Vision and Pattern Recognition, 2019.
2309.12284#59
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
60
[21] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring Mathematical Problem Solving With the MATH Dataset. In Neural Information Processing Systems: Datasets and Benchmarks, 2021. [22] G. Hinton, O. Vinyals, and J. Dean. Distilling the Knowledge in a Neural Network. Preprint arXiv:1503.02531, 2015. [23] N. Ho, L. Schmid, and S. Yun. Large Language Models Are Reasoning Teachers. In Annual Meeting of the Association for Computational Linguistics, 2023. [24] C. Hsieh, C. Li, C. Yeh, H. Nakhost, Y. Fujii, A. Ratner, R. Krishna, C. Lee, and T. Pfister. Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes. In Annual Meeting of the Association for Computational Linguistics, 2023.
2309.12284#60
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
61
[25] J. Huang, S. Gu, L. Hou, Y. Wu, X. Wang, H. Yu, and J. Han. Large Language Models Can Self-Improve. Preprint arXiv:2210.11610, 2022. [26] S. Imani, L. Du, and H. Shrivastava. MathPrompter: Mathematical Reasoning using Large Language Models. Preprint arXiv:2303.05398, 2023. [27] InternLM. InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities. Technical Report, 2023. [28] W. Jiang, H. Shi, L. Yu, Z. Liu, Y. Zhang, Z. Li, and J. Kwok. Forward-Backward Reasoning in Large Language Models for Mathematical Verification. Preprint arXiv:2308.07758, 2023. [29] W. Jiang, Y. Zhang, and J. Kwok. Effective Structured-Prompting by Meta-Learning and Representative Verbalizer. In International Conference on Machine Learning, 2023. [30] N. Kilbertus, G. Parascandolo, and B. Schölkopf. Generalization in anti-causal learning. Preprint arXiv:1812.00524, 2018.
2309.12284#61
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
63
[32] R. Li, L. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim, Q. Liu, E. Zheltonozhskii, T. Zhuo, T. Wang, O. Dehaene, M. Davaadorj, J. Lamy- Poirier, J. Monteiro, O. Shliazhko, N. Gontier, N. Meade, A. Zebaze, M. Yee, L. Umapathi, J. Zhu, B. Lipkin, M. Oblokulov, Z. Wang, R. Murthy, J. Stillerman, S. Patel, D. Abulkhanov, M. Zocca, M. Dey, Z. Zhang, N. Fahmy, U. Bhattacharyya, W. Yu, S. Singh, S. Luccioni, P. Villegas, M. Kunakov, F. Zhdanov, M. Romero, T. Lee, N. Timor, J. Ding, C. Schlesinger, H. Schoelkopf, J. Ebert, T.
2309.12284#63
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
65
[33] S. Li, J. Chen, Y. Shen, Z. Chen, X. Zhang, Z. Li, H. Wang, J. Qian, B. Peng, Y. Mao, W. Chen, and X. Yan. Explanations from Large Language Models Make Small Reasoners Better. Preprint arXiv:2210.06726, 2022. [34] H. Lightman, V. Kosaraju, Y. Burda, H. Edwards, B. Baker, T. Lee, J. Leike, J. Schulman, I. Sutskever, and K. Cobbe. Let’s Verify Step by Step. Preprint arXiv:2305.20050, 2023. [35] W. Liu, B. Dai, A. Humayun, C. Tay, C. Yu, L. Smith, J. Rehg, and L. Song. Iterative Machine Teaching. In International Conference on Machine Learning, 2017. [36] W. Liu, Z. Liu, H. Wang, L. Paull, B. Schölkopf, and A. Weller. Iterative Teaching by Label Synthesis. In Neural Information Processing Systems, 2021.
2309.12284#65
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
66
[37] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. RoBERTa: A Robustly Optimized BERT Pretraining Approach. Preprint arXiv:1907.11692, 2019. [38] H. Luo, Q. Sun, C. Xu, P. Zhao, J. Lou, C. Tao, X. Geng, Q. Lin, S. Chen, and D. Zhang. WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct. Preprint arXiv:2308.09583, 2023. [39] Z. Luo, C. Xu, P. Zhao, Q. Sun, X. Geng, W. Hu, C. Tao, J. Ma, Q. Lin, and D. Jiang. WizardCoder: Empowering Code Large Language Models with Evol-Instruct. Preprint arXiv:2306.08568, 2023. [40] L. Magister, J. Mallinson, J. Adamek, E. Malmi, and A. Severyn. Teaching Small Language Models to Reason. In Annual Meeting of the Association for Computational Linguistics, 2023.
2309.12284#66
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]
2309.12284
67
[41] M. Marion, A. Üstün, L. Pozzobon, A. Wang, M. Fadaee, and S. Hooker. When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale. Preprint arXiv:2309.04564, 2023. [42] S. Min, M. Lewis, L. Zettlemoyer, and H. Hajishirzi. MetaICL: Learning to Learn In Context. In North American Chapter of the Association for Computational Linguistics, 2022. [43] S. Mirzadeh, M. Farajtabar, A. Li, N. Levine, A. Matsukawa, and H. Ghasemzadeh. Improved Knowledge Distillation via Teacher Assistant. In AAAI Conference on Artificial Intelligence, 2020. [44] MosaicML. Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs. Technical Report, 2023. [45] E. Nijkamp, B. Pang, H. Hayashi, L. Tu, H. Wang, Y. Zhou, S. Savarese, and C. Xiong. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. Preprint arXiv:2203.13474, 2022.
2309.12284#67
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives without extra knowledge, which results in a new dataset called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use.
http://arxiv.org/pdf/2309.12284
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu
cs.CL, cs.AI
Technical Report, Work in Progress. Project Page: https://meta-math.github.io/
null
cs.CL
20230921
20231009
[ { "id": "2302.13971" }, { "id": "2308.09583" }, { "id": "2305.20050" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2211.09085" }, { "id": "2305.10403" }, { "id": "1812.00524" }, { "id": "2202.00132" }, { "id": "2309.12288" }, { "id": "2305.07759" }, { "id": "2309.04564" }, { "id": "2107.03374" }, { "id": "1811.10959" }, { "id": "2307.09288" }, { "id": "2306.01116" }, { "id": "2203.13474" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2308.07758" }, { "id": "2305.06161" }, { "id": "2309.05653" }, { "id": "2303.05398" }, { "id": "2210.06726" }, { "id": "2212.09561" }, { "id": "2211.12588" }, { "id": "1503.02531" }, { "id": "2210.11610" }, { "id": "1907.11692" }, { "id": "2306.08568" }, { "id": "2210.02414" }, { "id": "2305.14314" }, { "id": "2305.11206" }, { "id": "2309.02144" }, { "id": "2306.01694" } ]