doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.05922 | 37 | Juncheng B Li, Jackson Sam Michaels, Laura Yao, Lijun Yu, Zach Wood-Doughty, and Florian Metze. 2023a. Audio-journey: Efficient visual+llm-aided audio encodec diffusion. In Workshop on Efficient Systems for Foundation Models @ ICML 2023.
Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023b. Helma: A large-scale hallucination evaluation benchmark for large language models. arXiv preprint arXiv:2305.11747.
KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. 2023c. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355.
Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Lidong Bing, Shafiq Joty, and Soujanya Poria. 2023d. Chain of knowledge: A framework for grounding large language models with structured knowledge bases. arXiv preprint arXiv:2305.13269. | 2309.05922#37 | A Survey of Hallucination in Large Foundation Models | Hallucination in a foundation model (FM) refers to the generation of content
that strays from factual reality or includes fabricated information. This
survey paper provides an extensive overview of recent efforts that aim to
identify, elucidate, and tackle the problem of hallucination, with a particular
focus on ``Large'' Foundation Models (LFMs). The paper classifies various types
of hallucination phenomena that are specific to LFMs and establishes evaluation
criteria for assessing the extent of hallucination. It also examines existing
strategies for mitigating hallucination in LFMs and discusses potential
directions for future research in this area. Essentially, the paper offers a
comprehensive examination of the challenges and solutions related to
hallucination in LFMs. | http://arxiv.org/pdf/2309.05922 | Vipula Rawte, Amit Sheth, Amitava Das | cs.AI, cs.CL, cs.IR | null | null | cs.AI | 20230912 | 20230912 | [
{
"id": "2307.12168"
},
{
"id": "2308.11764"
},
{
"id": "2308.06394"
},
{
"id": "2305.06355"
},
{
"id": "2108.07258"
},
{
"id": "2305.11747"
},
{
"id": "2210.07688"
},
{
"id": "2307.08629"
},
{
"id": "2305.10355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13903"
},
{
"id": "2305.13269"
},
{
"id": "1809.02156"
},
{
"id": "2307.15343"
},
{
"id": "2309.02654"
},
{
"id": "2307.16372"
},
{
"id": "2307.02185"
},
{
"id": "2307.03987"
},
{
"id": "2309.01219"
},
{
"id": "2303.02961"
},
{
"id": "2304.13734"
},
{
"id": "2306.16092"
},
{
"id": "2302.12813"
}
] |
2309.05898 | 38 | shows that for most contexts, LLaMa-2 acts very strategically. More specifically, LLaMa-2 appears to be able to recognize the differences in the payoff structures and alter its choice of actions accordingly, although not necessarily always playing the equilibrium. In the "environment" context, this sophistication suffers a slight degradation as LLaMa-2 becomes unable to tell Prisoner's Delight and Stag Hunt apart, with "friendsharing" suffering from the same problem on top of also being unable to tell the Prisoner's Dilemma and Snowdrift apart. Summing up, while the results for the dominance analysis clearly indicate that LLaMa-2 is more context-driven than GPT-4, it seems that unlike the latter, the former is more capable of telling different game structures apart and adapting its strategy accordingly. | 2309.05898#38 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05922 | 38 | Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023e. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco:
Common objects in context. In Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer.
Hui Liu and Xiaojun Wan. 2023. Models see hallucinations: Evaluating the factuality in video captioning. arXiv preprint arXiv:2303.02961.
Junyu Luo, Cao Xiao, and Fenglong Ma. 2023. Zero-resource hallucination prevention for large language models. arXiv preprint arXiv:2309.02654.
Potsawee Manakul, Adian Liusie, and Mark J. F. Gales. 2023. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. | 2309.05922#38 | A Survey of Hallucination in Large Foundation Models | Hallucination in a foundation model (FM) refers to the generation of content
that strays from factual reality or includes fabricated information. This
survey paper provides an extensive overview of recent efforts that aim to
identify, elucidate, and tackle the problem of hallucination, with a particular
focus on ``Large'' Foundation Models (LFMs). The paper classifies various types
of hallucination phenomena that are specific to LFMs and establishes evaluation
criteria for assessing the extent of hallucination. It also examines existing
strategies for mitigating hallucination in LFMs and discusses potential
directions for future research in this area. Essentially, the paper offers a
comprehensive examination of the challenges and solutions related to
hallucination in LFMs. | http://arxiv.org/pdf/2309.05922 | Vipula Rawte, Amit Sheth, Amitava Das | cs.AI, cs.CL, cs.IR | null | null | cs.AI | 20230912 | 20230912 | [
{
"id": "2307.12168"
},
{
"id": "2308.11764"
},
{
"id": "2308.06394"
},
{
"id": "2305.06355"
},
{
"id": "2108.07258"
},
{
"id": "2305.11747"
},
{
"id": "2210.07688"
},
{
"id": "2307.08629"
},
{
"id": "2305.10355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13903"
},
{
"id": "2305.13269"
},
{
"id": "1809.02156"
},
{
"id": "2307.15343"
},
{
"id": "2309.02654"
},
{
"id": "2307.16372"
},
{
"id": "2307.02185"
},
{
"id": "2307.03987"
},
{
"id": "2309.01219"
},
{
"id": "2303.02961"
},
{
"id": "2304.13734"
},
{
"id": "2306.16092"
},
{
"id": "2302.12813"
}
] |
Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, and Mark Steedman. 2023. Sources of hallucination by large language models on inference tasks. arXiv preprint arXiv:2305.14552.
Niels Mündler, Jingxuan He, Slobodan Jenko, and Martin Vechev. 2023. Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation.
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813.
Jonas Pfeiffer, Francesco Piccinno, Massimo Nicosia, Xinyi Wang, Machel Reid, and Sebastian Ruder. 2023. mmt5: Modular multilingual pre-training solves source language hallucinations.
Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. arXiv preprint arXiv:1809.02156. | 2309.05922#39 | A Survey of Hallucination in Large Foundation Models | Hallucination in a foundation model (FM) refers to the generation of content
that strays from factual reality or includes fabricated information. This
survey paper provides an extensive overview of recent efforts that aim to
identify, elucidate, and tackle the problem of hallucination, with a particular
focus on ``Large'' Foundation Models (LFMs). The paper classifies various types
of hallucination phenomena that are specific to LFMs and establishes evaluation
criteria for assessing the extent of hallucination. It also examines existing
strategies for mitigating hallucination in LFMs and discusses potential
directions for future research in this area. Essentially, the paper offers a
comprehensive examination of the challenges and solutions related to
hallucination in LFMs. | http://arxiv.org/pdf/2309.05922 | Vipula Rawte, Amit Sheth, Amitava Das | cs.AI, cs.CL, cs.IR | null | null | cs.AI | 20230912 | 20230912 | [
{
"id": "2307.12168"
},
{
"id": "2308.11764"
},
{
"id": "2308.06394"
},
{
"id": "2305.06355"
},
{
"id": "2108.07258"
},
{
"id": "2305.11747"
},
{
"id": "2210.07688"
},
{
"id": "2307.08629"
},
{
"id": "2305.10355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13903"
},
{
"id": "2305.13269"
},
{
"id": "1809.02156"
},
{
"id": "2307.15343"
},
{
"id": "2309.02654"
},
{
"id": "2307.16372"
},
{
"id": "2307.02185"
},
{
"id": "2307.03987"
},
{
"id": "2309.01219"
},
{
"id": "2303.02961"
},
{
"id": "2304.13734"
},
{
"id": "2306.16092"
},
{
"id": "2302.12813"
}
] |
2309.05922 | 40 | Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695.
Logesh Kumar Umapathi, Ankit Pal, and Malaikannan Sankarasubbu. 2023. Med-halt: Medical domain hallucination test for large language models. arXiv preprint arXiv:2307.15343.
Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, and Dong Yu. 2023. A stitch in time saves nine: Detecting and mitigating hallucinations of llms by validating low-confidence generation. arXiv preprint arXiv:2307.03987.
Kyle Wiggers. 2023. Are ai models doomed to always hallucinate?
Jing Wu, Jennifer Hobbs, and Naira Hovakimyan. 2023. Hallucination improves the performance of unsupervised visual representation learning. arXiv preprint arXiv:2307.12168. | 2309.05922#40 | A Survey of Hallucination in Large Foundation Models | Hallucination in a foundation model (FM) refers to the generation of content
that strays from factual reality or includes fabricated information. This
survey paper provides an extensive overview of recent efforts that aim to
identify, elucidate, and tackle the problem of hallucination, with a particular
focus on ``Large'' Foundation Models (LFMs). The paper classifies various types
of hallucination phenomena that are specific to LFMs and establishes evaluation
criteria for assessing the extent of hallucination. It also examines existing
strategies for mitigating hallucination in LFMs and discusses potential
directions for future research in this area. Essentially, the paper offers a
comprehensive examination of the challenges and solutions related to
hallucination in LFMs. | http://arxiv.org/pdf/2309.05922 | Vipula Rawte, Amit Sheth, Amitava Das | cs.AI, cs.CL, cs.IR | null | null | cs.AI | 20230912 | 20230912 | [
{
"id": "2307.12168"
},
{
"id": "2308.11764"
},
{
"id": "2308.06394"
},
{
"id": "2305.06355"
},
{
"id": "2108.07258"
},
{
"id": "2305.11747"
},
{
"id": "2210.07688"
},
{
"id": "2307.08629"
},
{
"id": "2305.10355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13903"
},
{
"id": "2305.13269"
},
{
"id": "1809.02156"
},
{
"id": "2307.15343"
},
{
"id": "2309.02654"
},
{
"id": "2307.16372"
},
{
"id": "2307.02185"
},
{
"id": "2307.03987"
},
{
"id": "2309.01219"
},
{
"id": "2303.02961"
},
{
"id": "2304.13734"
},
{
"id": "2306.16092"
},
{
"id": "2302.12813"
}
] |
2309.05898 | 41 | Making a final assessment on the rationality of these algorithms from a game-theoretic perspective is no easy task. For GPT-3.5, we can safely claim that this LLM fails to act and think strategically in several different ways. Moreover, as already remarked, GPT-3.5 plays the same game differently when given a different contextual prompt, but does not play different games differently when given the same contextual prompt. This shows that the framing effect from the context is a more important factor for the algorithm's final decision compared to the extant structure of incentives, unlike what happens for its successor GPT-4. Indeed, for this large language model, the game itself plays a larger role in guiding the behavior of GPT-4. More specifically, the algorithm recognizes two distinct regimes (one in which R>T, and one in which T>R) and up to three different games. In the first regime, GPT-4 prefers cooperation, and in the second one it prefers defection. These overall preferences are mediated by the context supplied, but they are never fully erased or supplanted, not even under "friendsharing", the strongest context in terms of shaping the behavior | 2309.05898#41 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
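The chunk above (2309.05898#41) describes GPT-4 as collapsing the four games into two regimes, one where the mutual-cooperation payoff R exceeds the temptation payoff T and one where T exceeds R. A minimal sketch of that classification is below; the numeric payoffs are illustrative assumptions chosen to satisfy the usual orderings of each game, not values taken from the paper.

```python
# Illustrative R-vs-T split of the four games discussed in chunk 2309.05898#41.
# R = payoff for mutual cooperation, T = temptation payoff for defecting on a cooperator.
# The numbers are assumed for illustration only.
games = {
    "Prisoner's Dilemma": {"T": 10, "R": 5, "P": 3, "S": 2},
    "Snowdrift":          {"T": 10, "R": 5, "S": 3, "P": 2},
    "Stag Hunt":          {"R": 10, "T": 5, "P": 3, "S": 2},
    "Prisoner's Delight": {"R": 10, "T": 5, "S": 3, "P": 2},
}

for name, p in games.items():
    regime = "R > T (cooperation-leaning)" if p["R"] > p["T"] else "T > R (defection-leaning)"
    print(f"{name}: {regime}")
```

Under this split, Stag Hunt and Prisoner's Delight land in the first regime and Prisoner's Dilemma and Snowdrift in the second, matching the two-bucket behavior the chunk attributes to GPT-4.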
2309.05922 | 41 | Yongsheng Yu, Heng Fan, and Libo Zhang. 2023. Deficiency-aware masked transformer for video inpainting. arXiv preprint arXiv:2307.08629.
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A. Smith. 2023a. How language model hallucinations can snowball.
Shuo Zhang, Liangming Pan, Junzhou Zhao, and William Yang Wang. 2023b. Mitigating lan- guage model hallucination with interactive question- knowledge alignment.
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023c. Siren's song in the AI ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219. | 2309.05922#41 | A Survey of Hallucination in Large Foundation Models | Hallucination in a foundation model (FM) refers to the generation of content
that strays from factual reality or includes fabricated information. This
survey paper provides an extensive overview of recent efforts that aim to
identify, elucidate, and tackle the problem of hallucination, with a particular
focus on ``Large'' Foundation Models (LFMs). The paper classifies various types
of hallucination phenomena that are specific to LFMs and establishes evaluation
criteria for assessing the extent of hallucination. It also examines existing
strategies for mitigating hallucination in LFMs and discusses potential
directions for future research in this area. Essentially, the paper offers a
comprehensive examination of the challenges and solutions related to
hallucination in LFMs. | http://arxiv.org/pdf/2309.05922 | Vipula Rawte, Amit Sheth, Amitava Das | cs.AI, cs.CL, cs.IR | null | null | cs.AI | 20230912 | 20230912 | [
{
"id": "2307.12168"
},
{
"id": "2308.11764"
},
{
"id": "2308.06394"
},
{
"id": "2305.06355"
},
{
"id": "2108.07258"
},
{
"id": "2305.11747"
},
{
"id": "2210.07688"
},
{
"id": "2307.08629"
},
{
"id": "2305.10355"
},
{
"id": "2305.14552"
},
{
"id": "2305.13903"
},
{
"id": "2305.13269"
},
{
"id": "1809.02156"
},
{
"id": "2307.15343"
},
{
"id": "2309.02654"
},
{
"id": "2307.16372"
},
{
"id": "2307.02185"
},
{
"id": "2307.03987"
},
{
"id": "2309.01219"
},
{
"id": "2303.02961"
},
{
"id": "2304.13734"
},
{
"id": "2306.16092"
},
{
"id": "2302.12813"
}
] |
2309.05898 | 42 | are mediated by the context supplied, but they are never fully erased or supplanted, not even under "friendsharing", the strongest context in terms of shaping the behavior of the algorithm. This suggests that GPT-4 is more rational in a strategic sense, and an overall improvement over its predecessor. Even so, while our results indicate that GPT-4 tends to prioritize the structural aspects of the games over the contextual framing, this does not translate to a nuanced differentiation between distinct game types. In fact, GPT-4 seems to employ a binary threshold approach, categorizing games into "high" and "low" social dilemma buckets, rather than discerning the unique features of each game. Contrary to this, LLaMa-2 exhibits a more finely-grained understanding of the various game structures, even though it places greater emphasis on contextual factors compared to GPT-4. This suggests that LLaMa-2 is better equipped to navigate the subtleties of different strategic scenarios while also incorporating context into its decision-making, whereas GPT-4 adopts a more generalized, structure-centric strategy. The intricacies and idiosyncrasies of these | 2309.05898#42 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05898 | 45 | Over the course of this paper, we have investigated the capability of Large Language Models to act strategically using classic examples of social dilemmas from Game Theory. In particular, we have assessed how the context provided when presenting a model of interaction shapes and guides decision. The context defines the environment in which the interaction is taking place, and frames the payoffs in terms of concrete, contextual goals as opposed to generic utility gain. From a game-theoretic perspective, context should not matter: as long as the incentives stay the same, so too should behavior. On the other hand, what we have found in this paper is that the context provided to large language models plays a role in the final decision taken by the algorithm. More in particular, GPT-3.5 does not differentiate too well between games, but rather follows a single context-informed strategy in all four of them. GPT-4, on the other hand, displays fewer differences across contexts, but at the same time (with some variability) only meaningfully recognizes two of the four games provided. LLaMa-2 exhibits yet another mode of behavior, which is more capable of telling different games apart than GPT-4 but is at the same time more susceptible and affected by | 2309.05898#45 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05898 | 46 | exhibits yet another mode of behavior, which is more capable of telling different games apart than GPT-4 but is at the same time more susceptible and affected by context. In our querying of different LLMs, we always instruct each algorithm not to answer us with an explanation of their reasoning but rather just their choice of action. For a few individual instances, however, we have decided to delve deeper and explicitly ask for motivation. We do so in order to catch a glimpse of what the processes underlying each decision are, and while we cannot offer a comprehensive review of each one of them, we have nevertheless obtained some informative anecdotes from our experiments. First, when asking GPT-3.5 to explicitly motivate its choices, we observe that its reasoning is faulty and flawed in that it fails to carry out simple mathematical comparisons and to account for coplayer action. In the following example, we present evidence of GPT-3.5's difficulties in assessing which of two numbers is larger when playing the Prisoner's Delight under the "biz" context: | 2309.05898#46 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05898 | 47 | Given the updated scenario and the potential outcomes of the meeting, I will analyze the situation step by step to determine the best strategy for my firm.
If I choose "C": -If my coplayer chooses "C," my firm will earn 10 points. -If my coplayer chooses "D," my firm will earn 3 points.
If I choose "D": -If my coplayer chooses "C," my firm will earn 5 points. -If my coplayer chooses "D," my firm will earn 2 points.
To maximize my firm's potential profit, I need to consider the potential outcomes for both choices based on the likely actions of my coplayer.
If I assume that my coplayer is rational and aiming to maximize their firm's profit, they will also go through the same analysis. In this case, if I choose "C," my coplayer would likely choose "D" to earn 5 points instead of 10 points. Similarly, if I choose "D," my coplayer would likely choose "C" to earn 5 points instead of 2 points.
Given this rational behavior, it is in my firm's best interest to choose "D" because it provides a higher potential profit regardless of my coplayer's choice. Even in the worst-case scenario, where my coplayer chooses "D," my firm would still earn 2 points by choosing "D," which is higher than the 3 points we would earn by choosing "C."
| 2309.05898#47 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
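The transcript quoted in chunk 2309.05898#47 above hands GPT-3.5 the Prisoner's Delight payoffs (10 or 3 points for choosing C, 5 or 2 for choosing D, depending on the coplayer) and the model still argues for D. A short sketch of the pairwise comparison those numbers imply, assuming exactly the payoffs stated in the chunk; the helper name is illustrative, not taken from the paper.

```python
# Best-response check for the Prisoner's Delight payoffs quoted in chunk 2309.05898#47.
# Keys are (my_action, coplayer_action); values are my points, as stated in the transcript.
payoffs = {
    ("C", "C"): 10, ("C", "D"): 3,
    ("D", "C"): 5,  ("D", "D"): 2,
}

def best_response(coplayer_action: str) -> str:
    """Return the action that maximizes my payoff against a fixed coplayer action."""
    return max("CD", key=lambda a: payoffs[(a, coplayer_action)])

for other in "CD":
    print(f"coplayer plays {other}: best response is {best_response(other)}")
# C wins both comparisons (10 > 5 and 3 > 2), i.e. C is strictly dominant,
# which is the comparison the quoted GPT-3.5 reasoning gets wrong.
```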
2309.05898 | 49 | Given the setup of the interaction and the possible outcomes, the best approach is to analyze the potential outcomes for each choice (C or D) and decide based on maximizing the expected profit for my firm, taking into account the potential choices of my coplayer. Let's break down the potential outcomes for each choice: If I choose C and my coplayer chooses C: My firm earns 5 points. If I choose C and my coplayer chooses D: My firm earns 3 points. My coplayer earns 10 points. If I choose D and my coplayer chooses C: My firm earns 10 points. My coplayer earns 3 points. If I choose D and my coplayer chooses D: My firm earns 2 points. Given these outcomes, it's clear that the highest potential outcome for my firm is achieved when both my coplayer and I choose C. In this case, we both earn 5 points each. This outcome is preferable to any other scenario. Therefore, to maximize the potential profit for my firm, my preferred option is "C."
Therefore, to maximize the potential profit for my firm, my preferred option is "C." | 2309.05898#49 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05898 | 50 | Therefore, to maximize the potential profit for my firm, my preferred option is "C."
We run the same informal check by looking at the motivations that GPT-4 gives for its actions. A constant that we observe across both games and contexts is that GPT-4 tends to confuse all games for the Prisoner's Dilemma, but that does not stop it from choosing to cooperate when that action is justifiable. For example, this is how it motivates its choice to cooperate when playing Stag Hunt under the "biz" context:
Here's my line of thinking:
This situation represents a classic example of the prisoner's dilemma, a standard game theory problem. In this situation, the optimal choice for each firm individually would be to choose "D", as it guarantees a minimum return of 3 points even in the worst-case scenario (i.e., both choose "D"). By choosing "D", you also open up the possibility of earning 5 points if the coplayer chooses "C". So, from a purely self-interested perspective, "D" might seem like the optimal strategy.
However, in this one-off meeting where both firms aim to maximize profit and have neutral relations, there's a chance that both firms will see the mutual benefits of choosing "C". If both
| 2309.05898#50 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
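The Stag Hunt transcript in the row above states 10 points for mutual C, 5 for D against C, and a guaranteed 3 for mutual D; the payoff for playing C against D is not quoted, so the sketch below assumes S = 2 purely for illustration. It checks for pure-strategy Nash equilibria, showing why the game is a coordination problem with two equilibria rather than a dilemma with one dominant action.

```python
from itertools import product

# Pure-strategy equilibrium check for the Stag Hunt payoffs quoted in chunk 2309.05898#50.
# (C,C)=10, (D,C)=5 and (D,D)=3 come from the transcript; S=2 for C against D is assumed.
payoff = {("C", "C"): 10, ("C", "D"): 2, ("D", "C"): 5, ("D", "D"): 3}

def is_pure_nash(a: str, b: str) -> bool:
    """True if neither symmetric player gains by unilaterally switching actions."""
    best_a = max("CD", key=lambda x: payoff[(x, b)])
    best_b = max("CD", key=lambda x: payoff[(x, a)])
    return payoff[(a, b)] == payoff[(best_a, b)] and payoff[(b, a)] == payoff[(best_b, a)]

print([pair for pair in product("CD", repeat=2) if is_pure_nash(*pair)])
# -> [('C', 'C'), ('D', 'D')]: two equilibria, which is why the quoted reasoning weighs
#    trust in the coplayer against the safe payoff instead of finding a dominant action.
```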
2309.05898 | 51 |
choose "C", each earns 10 points, which is the maximum achievable for each party. This would make sense if both firms can trust each other to act in mutual self-interest for this one-off transaction, even without communication.
Given that both firms have a neutral relationship and similar goals, it may be reasonable to assume that the other CEO would also realize that "C" is the most beneficial for both parties if both choose it. Hence, choosing "C" could yield the best outcome of 10 points for each, banking on the assumption that rational actors will see the mutual benefit.
In summary, while "D" is the safer, more conservative choice, I'd opt for "C" with the expectation that the other party is also aiming for the mutually highest benefit.
Notably, action C is not merely chosen because it is justifiable, but also because GPT-4 envisions that an equally clever opponent would realize the implicit incentives that exist to coordinate on the most rewarding action. Moreover, GPT-4 pays attention to the fact that the interaction will only occur once, and uses this to frame its decision making. The following is an example when the algorithm plays the Prisoner's Dilemma under the "friendsharing" context:
My decision is based on a variety of considerations: | 2309.05898#51 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05898 | 52 | My decision is based on a variety of considerations:
-Mutual Benefit: If both my coplayer and I choose "C," we both earn 5 points. This is the best mutual outcome, and since our ultimate goal is to benefit the friendship and align with shared values, "C" seems the most harmonious choice.
-Risk vs. Reward: If I choose "D" and so does my coplayer, we both earn only 3 points. While this is higher than the 2 points one would get from a mismatch, it's significantly lower than the 5 points we would each get from a mutual "C."
-Trust and Values: This interaction is also a reflection of trust and shared values between my coplayer and myself. Choosing "C" would indicate a level of trust and a commitment to a mutually beneficial outcome. Even if I were to end up with fewer points, the value placed on our friendship would carry weight beyond this interaction.
-Incentive for Coplayer: My coplayer and I share the objective of maintaining our friendship and aligning with personal values. Therefore, it's reasonable to assume that my coplayer would also be inclined to choose "C" for mutual benefit. | 2309.05898#52 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05898 | 53 | -One-Time Event: Given that this is a one-time event, the choice serves as a single yet significant indication of the values and the level of trust within the friendship. Since there's no opportunity for iterative learning or adjustment, going with the choice that reflects mutual benefit and shared values is prudent.
-Minimize Regret: Should both of us choose "D," I would feel regret for not having chosen a path that could have led to a better mutual outcome. Choosing "C" minimizes this potential regret as it aims for the best mutual outcome.
For these reasons, I would choose "C."
In other words, GPT-4 recognizes not only that it cannot build reputation, but also that it cannot gain it back. In a surprising reversal, rather than considering the absence of a future punishment as an incentive to deviate, it instead considers the lack of an opportunity to make up as a motivator to cooperate. As for LLaMa-2's motivations for its actions, they tend to be rather formal and their context-dependence is hard to extract or parse. For instance, when asked to explain its thought process behind its choice of action when the game is the Prisoner's Dilemma and the context is "friendsharing", its response is: | 2309.05898#53 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05898 | 54 | My preferred option would be "C". My reasoning is as follows: 1. If both players choose C, we both earn 5 points, which is the highest possible score. Therefore, choosing C gives us the best chance of achieving the highest score. 2. If I choose C and my coplayer chooses D, I will earn 2 points, while my coplayer will earn 10 points. In this scenario, I would have preferred to choose D, as it would have given me a higher score. However, since I don't know my coplayer's choice, I can't adjust my strategy accordingly. 3. If I choose D and my coplayer chooses C, I will earn 10 points, while my coplayer will earn 2 points. This scenario is similar to the previous one, where I would have preferred to choose C, but since I do not know the choice of my co-player, I cannot adjust my strategy. 4. If both players choose D, we both earn 3 points. This outcome is the least desirable, as it results in a lower score for both players. Given these possibilities, my best strategy is to choose C, as it gives me the highest possible score if my coplayer also chooses C, and it also gives me a better score than choosing D in the other two scenarios. | 2309.05898#54 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
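The LLaMa-2 reply quoted in chunk 2309.05898#54 above enumerates the Prisoner's Dilemma payoffs (5/5 for mutual C, 2 vs. 10 on a mismatch, 3/3 for mutual D) yet still settles on C. A brief dominance check over exactly those quoted payoffs, as a hedged sketch (the function name is illustrative):

```python
# Dominance check for the Prisoner's Dilemma payoffs quoted in chunk 2309.05898#54.
# Keys are (my_action, coplayer_action); values are my points, as stated in the reply.
payoffs = {
    ("C", "C"): 5,  ("C", "D"): 2,
    ("D", "C"): 10, ("D", "D"): 3,
}

def dominant_action() -> str | None:
    """Return an action that is a best response to every coplayer action, if one exists."""
    for mine in "CD":
        if all(payoffs[(mine, other)] >= payoffs[(alt, other)]
               for other in "CD" for alt in "CD"):
            return mine
    return None

print(dominant_action())
# -> "D" (10 > 5 against C and 3 > 2 against D), the unconditional best response that the
#    quoted reply's step-by-step comparison fails to reach.
```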
2309.05898 | 55 | Even though this is just an individual example, most of LLaMa-2's replies tend to follow this pattern and emphasize the search for a best response rather than openly citing the circumstances surrounding the interaction as a motivator. As is made evident by this reply, the algorithm is not immune to trivial mathematical mistakes, which eventually prevent it from reaching the correct conclusion. This is also the case when playing Prisoner's Delight under the "biz" contextual framing:
My preferred option would be "D". My reasoning is as follows:
1. If my coplayer chooses C, I will earn 3 points if I choose C, and 5 points if I choose D. Therefore, choosing D gives me a higher payoff when my coplayer chooses C. 2. If my coplayer chooses D, I will earn 5 points if I choose D, and 3 points if I choose C. Therefore, choosing D gives me a higher payoff when my coplayer chooses D. 3. Since I cannot know for certain which option my coplayer will choose, I should choose the option that gives me the highest payoff in both scenarios. That option is D. 4. Additionally, choosing D gives me a chance to earn 5 points if my coplayer chooses C, which is the highest possible payoff in this game. Therefore, my chosen strategy is D. | 2309.05898#55 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05898 | 56 | While LLaMa-2 prefers to pick C when playing Prisoner's Delight (irrespective of context), when it does pick D it will still try to reason as if looking for an unconditional best response.
Overall, this informal inquiry into the motivations given by large language models for their choices of action substantially affirms the result of our earlier quantitative analysis. GPT-3.5 confirms
itself as incapable of strategic behavior, sometimes to the effect that its preferences become spiteful. Indeed, since social dilemmas offer a cooperative or socially optimal action and a rational or individually optimal action to each player, deviations from rationality can sometimes point to cooperative behavior. In our study of Prisoner's Delight, however, we have seen GPT-3.5 frequently fail to choose the "double optimum" (i.e. the action that is both socially and individually optimal), pointing to the fact that the algorithm is unsophisticated at best and spiteful at worst. | 2309.05898#56 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework
of game theory. Utilizing four canonical two-player games -- Prisoner's
Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these
models navigate social dilemmas, situations where players can either cooperate
for a collective benefit or defect for individual gain. Crucially, we extend
our analysis to examine the role of contextual framing, such as diplomatic
relations or casual friendships, in shaping the models' decisions. Our findings
reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual
framing, it shows limited ability to engage in abstract strategic reasoning.
Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and
context, but LLaMa-2 exhibits a more nuanced understanding of the games'
underlying mechanics. These results highlight the current limitations and
varied proficiencies of LLMs in strategic decision-making, cautioning against
their unqualified use in tasks requiring complex strategic reasoning. | http://arxiv.org/pdf/2309.05898 | Nunzio Lorè, Babak Heydari | cs.GT, cs.AI, cs.CY, cs.HC, econ.TH, 91C99 (Primary), 91A05, 91A10, 91F99 (Secondary), I.2.8; J.4; K.4.m | 25 pages, 12 figures | null | cs.GT | 20230912 | 20230912 | [
{
"id": "2305.16867"
},
{
"id": "2308.03762"
},
{
"id": "2305.07970"
},
{
"id": "2208.10264"
},
{
"id": "2305.15066"
},
{
"id": "2303.11436"
},
{
"id": "2303.12712"
},
{
"id": "2304.03439"
},
{
"id": "2303.13988"
},
{
"id": "2305.12763"
},
{
"id": "2305.05516"
},
{
"id": "2306.07622"
}
] |
2309.05898 | 57 | GPT-4, on the other hand, is more strategic in the choices it makes and responds more strongly to incentives: it will pick the individually optimal action when it stands to gain more from it, and it will pick the socially optimal actions when it would be more rewarding to do so. Yet GPT-4 is influenced by context, and displays a strong bias for the socially optimal action when the context implies that its coplayer is a friend. Moreover, while our results indicate that GPT-4 tends to prioritize the structural aspects of the games over the contextual framing, this does not translate to a nuanced differentiation between distinct game types. In fact, GPT-4 uses a substantially binary criterion rather than discerning the unique features of each game, unlike what LLaMa-2 does. Even so, the latter still suffers from being more context-dependent than the former, although in a way that is difficult to observe in the case of our informal analysis.
In any case, we find that no large language model operates in a way that is fully insulated from context. This indicates an overall lapse in rational behavior in a game-theoretic sense, but it also implies that these algorithms are susceptible to being manipulated by clever framing. A possible further implication of our findings is that LLMs might be unable to realize that the deliberate choice of an agent to offer a framing could be in and of itself a strategic choice by an adversary. | 2309.05898#57 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing | This paper investigates the strategic decision-making capabilities of three
While our results suggest that Large Language Models are unfit for strategic interaction, they represent just some preliminary findings in a field of study we anticipate will be rich and large. For instance, given how dependent these models are on context and framing, it would be interesting to study how they respond when cooperation is presented in the form of collusion, such as the formation of a cartel. Studying repeated games would also help shed some light on the role (if any) of different contexts in the emergence and sustainability of cooperation. Finally, many of the social dilemmas we present in this study are usually "solved" in real life through partner selection. Future research should therefore investigate whether Large Language Models are capable of selecting better partners and isolating defectors.
# References
[1] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
[2] Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. Gpt-4 passes the bar exam. Available at SSRN 4389233, 2023.
[3] Mingyu Zong and Bhaskar Krishnamachari. Solving math word problems concerning systems of equations with gpt-3. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 15972–15979, 2023.
[4] Jiayan Guo, Lun Du, and Hengyu Liu. Gpt4graph: Can large language models understand graph structured data? an empirical evaluation and benchmarking. arXiv preprint arXiv:2305.15066, 2023.
[5] Konstantine Arkoudas. Gpt-4 can't reason. arXiv preprint arXiv:2308.03762, 2023.
[6] Chris Frith and Uta Frith. Theory of mind. Current biology, 15(17):R644–R645, 2005.
[7] Manmeet Singh, Vaisakh SB, Neetiraj Malviya, et al. Mind meets machine: Unravelling gpt-4's cognitive psychology. arXiv preprint arXiv:2303.11436, 2023.
[8] Thilo Hagendorff and Sarah Fabi. Human-like intuitive behavior and reasoning biases emerged in language models – and disappeared in gpt-4. arXiv preprint arXiv:2306.07622, 2023.
[9] Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. Evaluating the logical reasoning ability of chatgpt and gpt-4. arXiv preprint arXiv:2304.03439, 2023.
[10] Rohaid Ali, Oliver Young Tang, Ian David Connolly, Patricia L Zadnik Sullivan, John H Shin, Jared S Fridley, Wael F Asaad, Deus Cielo, Adetokunbo A Oyelese, Curtis E Doberstein, et al. Performance of chatgpt and gpt-4 on neurosurgery written board examinations. medRxiv, pages 2023–03, 2023.
[11] John C Lin, David N Younessi, Sai S Kurapati, Oliver Y Tang, and Ingrid U Scott. Comparison of gpt-3.5, gpt-4, and human user performance on a practice ophthalmology written examination. Eye, pages 1–2, 2023.
[12] Joost CF de Winter. Can chatgpt pass high school exams on english language comprehension. Researchgate. Preprint, 2023.
[13] James Brand, Ayelet Israeli, and Donald Ngwe. Using gpt for market research. Available at SSRN 4395751, 2023.
[14] Aref Mahdavi Ardekani, Julie Bertz, Michael M Dowling, and Suwan Cheng Long. Econsentgpt: A universal economic sentiment engine? Available at SSRN, 2023.
[15] Yiting Chen, Tracy Xiao Liu, You Shan, and Songfa Zhong. The emergence of economic rationality of gpt. arXiv preprint arXiv:2305.12763, 2023.
[16] Gati Aher, Rosa I Arriaga, and Adam Tauman Kalai. Using large language models to simulate multiple humans. arXiv preprint arXiv:2208.10264, 2022.
[17] John J Horton. Large language models as simulated economic agents: What can we learn from homo silicus? Technical report, National Bureau of Economic Research, 2023.
[18] Thilo Hagendorff. Machine psychology: Investigating emergent capabilities and behavior in large language models using psychological methods. arXiv preprint arXiv:2303.13988, 2023.
[19] Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua R Gubler, Christopher Rytting, and David Wingate. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3):337–351, 2023.
[20] Steve Phelps and Yvan I Russell. Investigating emergent goal-like behaviour in large language models using experimental economics. arXiv preprint arXiv:2305.07970, 2023.
[21] Fulin Guo. Gpt agents in game theory experiments. arXiv preprint arXiv:2305.05516, 2023.
[22] Philip Brookins and Jason Matthew DeBacker. Playing games with gpt: What can we learn about a large language model from canonical strategic games? Available at SSRN 4493398, 2023.
[23] Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, and Eric Schulz. Playing repeated games with large language models. arXiv preprint arXiv:2305.16867, 2023.
[24] Varda Liberman, Steven M Samuels, and Lee Ross. The name of the game: Predictive power of reputations versus situational labels in determining prisoner's dilemma game moves. Personality and social psychology bulletin, 30(9):1175–1185, 2004.
[25] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
[26] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[27] David A Gianetto and Babak Heydari. Catalysts of cooperation in system of systems: The role of diversity and network structure. IEEE Systems Journal, 9(1):303–311, 2013.
[28] David A Gianetto and Babak Heydari. Network modularity is essential for evolution of cooperation under uncertainty. Scientific reports, 5(1):9340, 2015.
[29] Joseph N. Luchman. Determining relative importance in stata using dominance analysis: domin and domme. The Stata Journal, 21(2):510–538, 2021.
# Appendix A: Prompts
# Contextual Prompts
Meeting between CEOs, or "biz":
You are a CEO of a specific firm, and you are scheduled to meet with your counterpart from another firm. The two firms have neutral relationships towards each other. Before the scheduled meeting, both you and your counterpart must independently decide on one of two business strategies to adopt, labeled as "C" or "D." Each strategy comes with its own set of benefits and risks for your firm. However, the specific outcomes depend not only on your observable actions but also on your counterpart's unobservable decisions.
Your ultimate goal is to choose the strategy that you believe will accrue the largest profit to your firm, measured in points. However, since you cannot communicate with your counterpart, you will not be aware of their choice until the meeting takes place, and vice versa. Both you and your counterpart are motivated by the same considerations of maximizing profits or benefits for your respective firms.
The meeting will occur only once, and you will not have any opportunity for direct interaction or knowledge of your counterpart's intentions before making your decision. Henceforth, your counterpart will be referred to as "coplayer."
In this strategic interaction, you must carefully evaluate your firm's interests and anticipate what strategy your coplayer might choose to achieve the best possible outcome for your firm in the meeting.
Negotiations over Environmental Regulation, or "environment":
You are the CEO of a large and influential company in your industry, and you are scheduled to meet with the CEO of another industry-leading company in the same sector. Both of your companies hold significant sway in the market and are considered natural industry leaders. Your decisions often influence the policies that smaller firms adopt in the industry.
Before the scheduled meeting, both you and your counterpart must independently decide on one of two environmental policy approaches: "C" or "D." Each policy comes with its own set of benefits and potential costs for your respective companies and the overall environmental impact. However, the specific outcomes depend not only on your observable actions but also on your coplayer's unobservable decisions.
Your ultimate goal is to choose the policy that you believe will be the most advantageous for your company's interests and public image, jointly measured in points. Since you cannot communicate with your counterpart, you will not be aware of their policy choice until the meeting takes place, and vice versa.
Both you and your counterpart are motivated by the same considerations of maximizing benefits for your respective companies.
The meeting will occur only once, and you will not have any opportunity for direct interaction or knowledge of your counterpart's intentions before making your decision.
Henceforth, your counterpart will be referred to as "coplayer."
In this strategic interaction between industry leaders, you must carefully evaluate your company's market position and anticipate which policy your coplayer might choose to influence the industry and shape the policies adopted by smaller firms. The decisions made in this meeting could have far-reaching consequences for the entire industry's environmental practices.
Chat between friends, or "friendsharing":
You and your friend are facing a unique decision as you both need to choose between two different sets of rules or codes of conduct. Before making the decision, both of you must independently select either "C" or "D." Each code comes with its own advantages and potential implications for your friendship and individual preferences. However, the final outcome depends not just on your observable actions but also on your friend's undisclosed choice.
Your ultimate goal is to pick the code that you believe will be most beneficial for your friendship and align with your personal values, measured by a subjective score in points. However, since you cannot communicate with your friend about your choice, you will only learn of their decision during the discussion, and vice versa. Both you and your friend are motivated by the shared considerations of preserving your friendship and following rules that resonate with your beliefs.
This is a one-time event, providing no opportunity for direct interaction or knowledge of your friend's intentions before making your decision. Henceforth, your friend will be referred to as the "coplayer."
In this strategic interaction between friends, you must carefully assess your own values and try to anticipate which code your coplayer might choose to ensure the most harmonious outcome for your friendship.
Talk between teammates, or "team":
You are a member of a certain team within a company, and you are scheduled to meet with your colleague, who is also a member of the same team. The two of you are acquaintances but have otherwise not interacted much. Before the scheduled meeting, both you and your colleague must independently decide on one of two projects to follow, labeled as "C" or "D." Each project comes with its own set of benefits and potential risks for the team and for the two of you. However, the specific outcomes depend not only on your observable actions but also on your colleague's unobservable decisions.
Your ultimate goal is to choose the project that you believe will accrue the largest benefit to you in terms of promotion opportunities, measured in points. However, since you cannot communicate with your colleague, you will not be aware of their choice until the meeting takes place, and vice versa. Both you and your colleague are motivated by the same considerations of maximizing your own benefits.
The meeting will occur only once, and you will not have any opportunity for direct interaction or knowledge of your colleague's intentions before making your decision. Henceforth, your colleague will be referred to as "coplayer."
In this strategic interaction within the team, you must carefully evaluate your own interests and anticipate what strategy your coplayer might choose to achieve the best possible outcome for yourself in the meeting.
Summit between international leaders, or "IR":
You are a delegate of a specific nation, and you are scheduled to meet with your counterpart from another nation. The two nations have neutral relationships towards each other. Before the scheduled meeting, both you and your counterpart must independently decide on one of two diplomatic strategies, labeled as "C" or "D." Each strategy comes with its own set of benefits and potential risks for your nation. However, the specific outcomes depend not only on your observable actions but also on your counterpart's unobservable decisions.
Your ultimate goal is to choose the strategy that you believe will accrue the greatest economic gains or geopolitical advantages, measured in points. However, since you cannot communicate with your counterpart, you will not be aware of their choice until the meeting takes place, and vice versa. Both you and your counterpart are motivated by the same considerations of maximizing benefits for your respective nations.
The meeting will occur only once, and you will not have any opportunity for direct interaction or knowledge of your counterpart's intentions before making your decision. Henceforth, your counterpart will be referred to as "coplayer."
In this strategic interaction, you must carefully evaluate your nation's interests and anticipate what strategy your coplayer might choose to achieve the best possible outcome for your nation in the meeting.
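Each of the five contextual preambles above is paired with one of the game prompts in the next section to form a single query. A minimal sketch of how such a trial could be assembled is shown below; the dictionary keys, the `query_model` stub, and the overall structure are illustrative assumptions rather than the authors' exact pipeline, and the long prompt texts are abbreviated with ellipses.

```python
# Sketch of one trial: a contextual preamble plus a game (payoff) prompt.
# Replace the elided strings with the full texts above and `query_model`
# with a real LLM client call; both are placeholders in this sketch.

CONTEXTS = {
    "biz": "You are a CEO of a specific firm, and you are scheduled to meet ...",
    "environment": "You are the CEO of a large and influential company ...",
    "friendsharing": "You and your friend are facing a unique decision ...",
    "team": "You are a member of a certain team within a company ...",
    "IR": "You are a delegate of a specific nation ...",
}

GAMES = {
    "delight": "If you choose C and your coplayer also chooses C, you will both earn 10 points. ...",
    "dilemma": "If you choose C and your coplayer also chooses C, you will both earn 5 points. ...",
    "snowdrift": "If you choose C and your coplayer also chooses C, you will both earn 5 points. ...",
    "staghunt": "If you choose C and your coplayer also chooses C, you will both earn 10 points. ...",
}


def build_prompt(context: str, game: str) -> str:
    """Concatenate a contextual preamble with a game prompt."""
    return CONTEXTS[context] + "\n\n" + GAMES[game]


def query_model(prompt: str) -> str:
    """Placeholder for a chat-model call; returns a canned answer so the sketch runs."""
    return "C"


if __name__ == "__main__":
    answer = query_model(build_prompt("IR", "dilemma")).strip().upper()[:1]
    print(answer)  # expected to be a single letter, "C" or "D"
```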
# Game Prompts
Prisoner's Delight:
If you choose C and your coplayer also chooses C, you will both earn 10 points. If you choose C while your coplayer chooses D, you will earn 3 points and your coplayer will earn 5 points. If you choose D while your coplayer chooses C, you will earn 5 points and your coplayer will earn 3 points. If you choose D and your coplayer also chooses D, you will both earn 2 points. Think carefully about how you would approach this interaction in order to achieve the highest possible score in points, conditional on the action of your coplayer. Please think step by step before making a decision. Your answer to this question must consist of exactly one letter, either "C" or "D" to denote your preferred option (no need to explain your reasoning).
Prisoner's Dilemma:
If you choose C and your coplayer also chooses C, you will both earn 5 points. If you choose C while your coplayer chooses D, you will earn 2 points and your coplayer will earn 10 points. If you choose D while your coplayer chooses C, you will earn 10 points and your coplayer will earn 2 points. If you choose D and your coplayer also chooses D, you will both earn 3 points. Think carefully about how you would approach this interaction in order to achieve the highest possible score in points, conditional on the action of your coplayer. Please think step by step before making a decision. Your answer to this question must consist of exactly one letter, either "C" or "D" to denote your preferred option (no need to explain your reasoning).
Snowdrift:
If you choose C and your coplayer also chooses C, you will both earn 5 points. If you choose C while your coplayer chooses D, you will earn 3 points and your coplayer will earn 10 points. If you choose D while your coplayer chooses C, you will earn 10 points and your coplayer will earn 3 points. If you choose D and your coplayer also chooses D, you will both earn 2 points. Think carefully about how you would approach this interaction in order to achieve the highest possible score in points, conditional on the action of your coplayer. Please think step by step before making a decision. Your answer to this question must consist of exactly one letter, either "C" or "D" to denote your preferred option (no need to explain your reasoning).
Stag Hunt:
If you choose C and your coplayer also chooses C, you will both earn 10 points. If you choose C while your coplayer chooses D, you will earn 2 points and your coplayer will earn 5 points. If you choose D while your coplayer chooses C, you will earn 5 points and your coplayer will earn 2 points. If you choose D and your coplayer also chooses D, you will both earn 3 points. Think carefully about how you would approach this interaction in order to achieve the highest possible score in points, conditional on the action of your coplayer. Please think step by step before making a decision. Your answer to this question must consist of exactly one letter, either "C" or "D" to denote your preferred option (no need to explain your reasoning).
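The four payoff structures above differ only in a handful of numbers, but those numbers change the games' equilibria. The sketch below encodes the payoffs exactly as stated in the prompts and enumerates pure-strategy Nash equilibria by checking best responses; it is an illustration of the games' structure, not code released with the paper.

```python
# Payoffs taken verbatim from the game prompts above:
# payoffs[(my_action, coplayer_action)] = (my_points, coplayer_points).
GAMES = {
    "Prisoner's Delight": {("C", "C"): (10, 10), ("C", "D"): (3, 5),
                           ("D", "C"): (5, 3),   ("D", "D"): (2, 2)},
    "Prisoner's Dilemma": {("C", "C"): (5, 5),   ("C", "D"): (2, 10),
                           ("D", "C"): (10, 2),  ("D", "D"): (3, 3)},
    "Snowdrift":          {("C", "C"): (5, 5),   ("C", "D"): (3, 10),
                           ("D", "C"): (10, 3),  ("D", "D"): (2, 2)},
    "Stag Hunt":          {("C", "C"): (10, 10), ("C", "D"): (2, 5),
                           ("D", "C"): (5, 2),   ("D", "D"): (3, 3)},
}

ACTIONS = ("C", "D")


def pure_nash(payoffs):
    """Return the action profiles from which neither player wants to deviate."""
    equilibria = []
    for mine in ACTIONS:
        for theirs in ACTIONS:
            my_best = all(payoffs[(mine, theirs)][0] >= payoffs[(alt, theirs)][0]
                          for alt in ACTIONS)
            their_best = all(payoffs[(mine, theirs)][1] >= payoffs[(mine, alt)][1]
                             for alt in ACTIONS)
            if my_best and their_best:
                equilibria.append((mine, theirs))
    return equilibria


for name, payoffs in GAMES.items():
    print(name, pure_nash(payoffs))
# Prisoner's Delight -> [('C', 'C')]              (cooperation is dominant)
# Prisoner's Dilemma -> [('D', 'D')]              (defection is dominant)
# Snowdrift          -> [('C', 'D'), ('D', 'C')]  (anti-coordination)
# Stag Hunt          -> [('C', 'C'), ('D', 'D')]  (coordination)
```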
# Appendix B: Additional Figures
(a) Prisoner's Dilemma (b) Snowdrift (c) Stag Hunt (d) Prisoner's Delight
[Figure 10, four bar-chart panels, one per game ("Defections in game: ..."), each plotting the share of defections for the contexts IR, biz, friendsharing, environment, and team, alongside an equilibrium reference bar.]
Figure 10: Bar chart visualization of the propensity to defect or cooperate for each context and for each game using GPT-3.5. In red, the percentage of times the algorithm chose to defect. The dark red striped bar indicates equilibrium values. In the Prisoner's Delight, a rational player would never defect, and thus no bar is displayed. For Stag Hunt and Snowdrift, we indicate as "equilibrium" the probabilities an equilibrium mixed strategy would assign to either action, but both games possess multiple equilibria in pure strategies.
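For Stag Hunt and Snowdrift, the mixed-strategy probabilities behind the equilibrium bars can be recovered directly from the payoffs stated in the game prompts. The short check below derives the indifference probability; it is a reader-side verification under the stated payoffs, not an excerpt from the paper's code.

```python
# Symmetric mixed equilibrium: find q = Pr(coplayer plays C) that makes a player
# indifferent between C and D, given payoffs R = (C,C), S = (C,D), T = (D,C), P = (D,D).
def mixed_coop_prob(R, S, T, P):
    # EU(C) = R*q + S*(1-q) and EU(D) = T*q + P*(1-q); setting them equal gives:
    return (P - S) / (R - S - T + P)

print(mixed_coop_prob(5, 3, 10, 2))   # Snowdrift: q = 1/6, i.e. defect with probability 5/6
print(mixed_coop_prob(10, 2, 5, 3))   # Stag Hunt: q = 1/6, i.e. defect with probability 5/6
```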
(a) Prisoner's Dilemma (b) Snowdrift (c) Stag Hunt (d) Prisoner's Delight
[Figure 11, four bar-chart panels, one per game ("Defections in game: ..."), each plotting the share of defections for the contexts IR, biz, friendsharing, environment, and team, alongside an equilibrium reference bar.]
Figure 11: Stacked bar chart visualization of the propensity to defect for each context and for each game using GPT-4. The methods employed are the same as those described in Figure 10.
(a) Prisoner's Dilemma (b) Snowdrift (c) Stag Hunt (d) Prisoner's Delight
[Figure 12, four bar-chart panels, one per game ("Defections in game: ..."), each plotting the share of defections for the contexts IR, biz, friendsharing, environment, and team, alongside an equilibrium reference bar.]
Figure 12: Bar chart visualization of the propensity to defect for each context and for each game using LLaMa-2. The methods employed are the same as those described in Figure 10.
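The bar heights in Figures 10-12 are simply per-context defection frequencies. A sketch of that aggregation step is given below; the file name and column layout are assumptions made for illustration, not the authors' released data format.

```python
# Aggregate raw trial records into defection rates per (game, context) cell.
# Assumed CSV columns: model, game, context, action ("C" or "D").
import csv
from collections import defaultdict

counts = defaultdict(lambda: [0, 0])  # (game, context) -> [defections, trials]

with open("responses.csv", newline="") as f:  # hypothetical results file
    for row in csv.DictReader(f):
        key = (row["game"], row["context"])
        counts[key][1] += 1
        if row["action"].strip().upper().startswith("D"):
            counts[key][0] += 1

for (game, context), (defections, trials) in sorted(counts.items()):
    print(f"{game:20s} {context:15s} defection rate = {defections / trials:.2f}")
```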
arXiv:2309.05653v3 [cs.CL] 3 Oct 2023
Preprint. Work in Progress
# MAMMOTH: BUILDING MATH GENERALIST MODELS THROUGH HYBRID INSTRUCTION TUNING
Xiang Yue*, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen* (University of Waterloo, The Ohio State University, HKUST, University of Edinburgh, 01.AI). [email protected], [email protected]
https://tiger-ai-lab.github.io/MAmmoTH/
# ABSTRACT
Microsoft Research
# Abstract
We continue the investigation into the power of smaller Transformer-based language models as initiated by TinyStories – a 10 million parameter model that can produce coherent English – and the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate "textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the "Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good – such as the ability to "think step by step" or perform some rudimentary in-context learning – and bad, including hallucinations and the potential for toxic and biased generations – encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to promote further research on these urgent topics.
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 1 | # ABSTRACT
We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models. | 2309.05653#1 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05653 | 2 | [Figure 1 graphic. Left panel, MathInstruct hybrid instruction tuning: for the problem "Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?", the Chain-of-Thought (CoT) rationale reads "Weng earns 12/60 = 0.2 per minute. Doing 50 mins, she earned 0.2 x 50 = 10", and the Program-of-Thought (PoT) rationale reads "hourly_rate = 12; time_worked = 50/60; earnings = hourly_rate * time_worked; print(round(earnings, 2))". Right panels: bar charts comparing Base, SoTA, and MAmmoTH (Ours) on in-domain and out-of-domain datasets at the 7B, 13B, 30B, and 70B scales, with per-scale gains annotated between roughly +16 and +32 points.]
Figure 1: The superior performance of MAmmoTH, a series of models instruction-tuned to solve a diverse set of mathematical problems using hybrid CoT and PoT rationales. MAmmoTH significantly outperforms base and SoTA models on both in-domain and out-of-domain test sets, across all scales. | 2309.05653#2 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
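The PoT rationale quoted in Figure 1 above is directly executable. Below is a minimal runnable version; the variable names come from the figure, while the surrounding comments and harness are ours, not the authors'.

```python
# Runnable form of the PoT rationale from Figure 1 (babysitting-pay example).
hourly_rate = 12           # dollars per hour
time_worked = 50 / 60      # 50 minutes expressed in hours
earnings = hourly_rate * time_worked
print(round(earnings, 2))  # prints 10.0, matching the CoT answer 0.2 * 50 = 10
```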
2309.05463 | 3 | Figure 1: Benchmark results comparing phi-1.5, its version enhanced with filtered web data phi-1.5-web, and other state-of-the-art open-source LLMs. Sizes range from phi-1.5's 1.3 billion parameters (Falcon-RW-1.3B [PMH+23]) to 10x larger models like Vicuna-13B [ZCS+23], a fine-tuned version of Llama-13B [TLI+23]. Benchmarks are broadly classified into three categories: common sense reasoning, language skills, and multi-step reasoning. The classification is meant to be taken loosely, for example while HellaSwag requires common sense reasoning, it arguably relies more on "memorized knowledge". One can see that phi-1.5 models perform comparably in common sense reasoning and language skills, and vastly exceed other models in multi-step reasoning. Note that the numbers are from our own evaluation pipeline, to ensure consistency between models, and thus they might differ slightly from numbers reported elsewhere.
# Introduction | 2309.05463#3 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 3 | ∗Xiang Yue and Wenhu Chen are the leading authors of the paper. They contributed equally to this project.
# INTRODUCTION
This work focuses on mathematical reasoning, a critical capability of modern large language models (LLMs) (OpenAI, 2023; Anil et al., 2023). Despite the recent advances in this field, a noticeable gap exists between closed-source and open-source LLMs: closed-source models like GPT-4 (OpenAI, 2023), PaLM-2 (Anil et al., 2023), and Claude 2 (Bai et al., 2022) dominate popular mathematical reasoning benchmarks such as GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021b), while open-source models like Llama (Touvron et al., 2023a;b), Falcon (Penedo et al., 2023), OPT (Zhang et al., 2022) lag behind on all benchmarks by a wide margin. | 2309.05653#3 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 4 | # Introduction
Over the past few years, Large Language Models (LLMs) have transformed the field of Natural Language Processing. More broadly, they hold the promise of a paradigm shift for human-computer interaction. These advancements have far-reaching economic implications, as well as the potential to redefine our conceptual frameworks of artificial intelligence and perhaps even cognition itself. Moreover, the latest generation of models such as GPT-4 [Ope23] have demonstrated remarkable improvements over their predecessors, offering capabilities previously thought to be unattainable in the short term; see for example [BCE+23] for an in-depth comparison between GPT-4 and its predecessor GPT-3.5. | 2309.05463#4 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 4 | Current efforts to bridge this gap are twofold: (1) Continued pre-training like Galactica (Taylor et al., 2022) and MINERVA (Lewkowycz et al., 2022), which continues to train an LLM on math-related web data of more than 100B tokens. This approach improves a modelâs general scientific reasoning capability but incurs a high computation cost. (2) Dataset-specific fine-tuning like rejection sampling fine-tuning (RFT) (Yuan et al., 2023) and WizardMath (Luo et al., 2023), which fine-tunes LLMs using supervised data specific to certain datasets. Although such approaches improve in-domain performance, they cannot generalize to a wider range of math reasoning tasks beyond their fine- tuning data. For instance, both RFT and WizardMath can increase the accuracy on GSM8K (Cobbe et al., 2021) by 30%+, one of their fine-tuning datasets, but hurt the accuracy on out-of-domain datasets like MMLU-Math (Hendrycks et al., 2021a) or AQuA (Ling et al., 2017) by up to 10%. | 2309.05653#4 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 5 | The improvement from one generation of LLMs to the next seems at the moment to primarily stem from scale, with the most powerful models nearing trillions of parameters and trillion of tokens for training data (for example, PaLM [CND+22] has 540 billion parameters and was trained on 780 billion tokens). A natural question arises: Is this large scale indispensable for achieving high levels of capability? Far from being merely an academic question, answering this holds implications across several dimensions. Economically, the cost of training, deploying, and maintaining such large models can be substantial. Scientifically, understanding whether similar capabilities can be achieved at a smaller scale could provide insights into the architectures and development of intelligent systems. From a responsible AI standpoint, the energy consumption of large-scale models is becoming an increasing concern, as is the question of how controllable or governable these large models can be. Finally, the ability to train compact models with cutting-edge capabilities would democratize advanced AI, enabling a broader range of individuals and organizations to study and deploy them, instead of being an exclusive domain of a few with vast computational resources. | 2309.05463#5 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 5 | In this paper, we aim to propose a lightweight yet generalizable math instruction-tuning approach to enhance the general (i.e., not limited to the fine-tuning tasks) mathematical reasoning capabilities of LLMs. Existing methods (Luo et al., 2023; Yuan et al., 2023; Taylor et al., 2022) primarily focus on Chain-of-Thought (CoT) approaches (Wei et al., 2022b; Nye et al., 2022) to solve math problems through step-by-step natural language descriptions. This approach excels in its generality to cover most math subjects but struggles with computation precision, and complex mathematical or algorith- mic reasoning procedures (e.g., solving quadratic equation roots and calculating matrix eigenvalues). In contrast, prompts in the format of code like Program-of-Thought (PoT) approaches (Chen et al., 2022) and PAL (Madaan et al., 2022; Gao et al., 2023) utilize external tools (i.e., Python interpreter) to greatly simplify the math solving process. This approach advocates offloading the computation process to the external Python | 2309.05653#5 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
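As a concrete illustration of the offloading idea described in the chunk above, here is a minimal sketch of the kind of program a PoT rationale can emit for the two examples the text mentions (quadratic roots via sympy, matrix eigenvalues via numpy); the specific equation and matrix are our own placeholders, not examples from the paper.

```python
# PoT-style offloading sketch: the exact computation is done by Python
# libraries rather than by the language model itself.
import numpy as np
import sympy as sp

# Example 1 (assumed equation): solve x^2 - 5x + 6 = 0 symbolically.
x = sp.symbols("x")
print(sp.solve(sp.Eq(x**2 - 5 * x + 6, 0), x))  # [2, 3]

# Example 2 (assumed matrix): eigenvalues of a small symmetric matrix.
matrix = np.array([[2.0, 1.0], [1.0, 2.0]])
print(np.linalg.eigvals(matrix))  # approximately [3., 1.]
```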
2309.05463 | 6 | In this work we continue the investigation into the fundamental question of "how small can an LLM be to achieve certain capabilities". The prior work [EL23] considered this question for the task of "speaking fluent English", while the subsequent work [GZA+23] considered the more challenging task of coding simple functions in Python. Here we focus on the more elusive concept of common sense reasoning, a notoriously challenging task for AI [SBBC21]. Our results are summarized in Figure 1. In a nutshell we build phi-1.5, a 1.3 billion parameter model trained on a dataset of 30 billion tokens, which achieves common sense reasoning benchmark results comparable to models ten times its size that were trained on datasets more than ten times larger. Moreover, our dataset consists almost exclusively of synthetically generated data (closely following the approach from [GZA+23], see next section for more details), which has important implications for the potential to control for the notoriously challenging issue of toxic and biased content generation with LLMs [BGMMS21]. Additionally, we discuss the performance of a related filtered web data enhanced version of phi-1.5, which we call phi-1.5-web. | 2309.05463#6 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 6 | tools (i.e., Python interpreter) to greatly simplify the math solving process. This approach advocates offloading the computation process to the external Python interpreter to solve complex mathematical and algorithmic reason- ing procedures (e.g., solving quadratic equations with sympy or calculating matrix eigenvalues with numpy). However, PoT falls short in dealing with more abstract reasoning scenarios, like common- sense reasoning, formal logic, and abstract algebra, especially when there exist no built-in APIs. | 2309.05653#6 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 7 | We open-source our raw phi-1.5 model (without instruction fine-tuning or any other stage of alignment) to empower the research community in its work on some of the most urgent questions around LLMs: in-context learning, mechanistic interpretability, and mitigation strategies for hallucinations, toxic content generation, and biased outputs. Indeed, phi-1.5 is the first LLM at the one billion parameters scale to exhibit most of the relevant traits of larger LLMs for research on these topics. We hope that phi-1.5's size will make experimentation easier than with larger open-source models such as the Llama family [TLI+23].
Llama-7B: train time >80K GPU hrs; micro-batch (max) 2; inference speed 14ms per token; inference memory 18G at 2048 ctx.; data size 1T tokens; train tokens 1T.
phi-1.5 (1.3B): train time 1.5K GPU hrs; micro-batch (max) 8; inference speed <3ms per token; inference memory 3.5G at 2048 ctx.; data size 30B tokens; train tokens 150B.
phi-1.5-web (1.3B): train time 3K GPU hrs; micro-batch (max) 8; inference speed <3ms per token; inference memory 3.5G at 2048 ctx.; data size 100B tokens; train tokens 300B.
Table 1: Comparison of compute of different models using a single A100-80G with context length 2048 and fp16.
# 2 Technical specifications | 2309.05463#7 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 7 | To leverage the strengths of both CoT and PoT approaches, we introduce a new math hybrid instruction-tuning dataset MathInstruct, which has two main characteristics: (1) broad cov- erage of different math fields and complexity levels, and (2) hybrid CoT & PoT rationales. MathInstruct is based on seven existing math rationale datasets and six newly-curated datasets (see details in Table 1). We use MathInstruct to fine-tune Llama (Touvron et al., 2023a;b; Rozi`ere et al., 2023) models of different scales ranging from 7B to 70B. The resulting MAmmoTH models ( Figure 1) demonstrate unprecedented potential in serving as math generalists. | 2309.05653#7 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
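To make the "hybrid CoT & PoT rationales" idea in the chunk above concrete, here is a hypothetical sketch of what one CoT record and one PoT record in an instruction-tuning corpus could look like; the field names and prompt wording are illustrative assumptions, not the actual MathInstruct schema.

```python
# Hypothetical instruction-tuning records illustrating the CoT/PoT hybrid.
# Field names and phrasing are assumptions, not the released MathInstruct format.
cot_record = {
    "instruction": "What is 15% of 80?",
    "output": "15% of 80 is 0.15 * 80 = 12. The answer is 12.",
}
pot_record = {
    "instruction": "What is 15% of 80? Let's write a program.",
    "output": "result = 0.15 * 80\nprint(result)",
}

for record in (cot_record, pot_record):
    print(record["instruction"], "->", record["output"].replace("\n", "; "))
```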
2309.05463 | 8 | Table 1: Comparison of compute of different models using a single A100-80G with context length 2048 and fp16.
# 2 Technical specifications
We give here details of the creation of phi-1.5. We also describe two other models created to investigate the value of web data compared to our synthetic data, phi-1.5-web-only and phi-1.5-web.
# 2.1 Architecture
The architecture for phi-1.5 (and its variants) is exactly the same as our previous model phi-1 in [GZA+23]. It is a Transformer [VSP+17] with 24 layers, 32 heads, and each head has dimension 64. We use rotary embedding with rotary dimension 32, and context length 2048. We also use flash-attention [DFE+22, Dao23] for training speed up, and we use the tokenizer of codegen-mono [NPH+22].
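For readability, the architecture hyperparameters stated above can be collected in one place; the dictionary below is our own summary, not the authors' configuration code.

```python
# Summary of the phi-1.5 architecture hyperparameters stated in the text.
# Key names are ours; hidden_size is implied by 32 heads x 64 dims per head.
PHI_1_5_ARCH = {
    "n_layers": 24,
    "n_heads": 32,
    "head_dim": 64,
    "hidden_size": 32 * 64,        # 2048
    "rotary_dim": 32,
    "context_length": 2048,
    "attention_impl": "flash-attention",
    "tokenizer": "codegen-mono",
}

for name, value in PHI_1_5_ARCH.items():
    print(f"{name}: {value}")
```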
# 2.2 Training data | 2309.05463#8 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 8 | We evaluate MAmmoTH on a spectrum of datasets, including in-domain (IND) test setsâ GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021b), AQuA-RAT (Ling et al., 2017), NumGLUE (Mishra et al., 2022b)âand out-of-domain (OOD) test setsâSVAMP (Patel et al., 2021), SAT (Zhong et al., 2023), MMLU-Math (Hendrycks et al., 2021a), Mathematics (Davies et al., 2021), and SimulEq (Koncel-Kedziorski et al., 2016). Compared with existing meth- ods, our models generalize better to OOD datasets and substantially improve the performance of open-source LLMs in mathematical reasoning. Notably, on the popular competition-level MATH dataset (Hendrycks et al., 2021b), our 7B model can beat WizardMath (open-source MATH SoTA) (Luo et al., 2023) by 3.5x (35.2% vs 10.7%), and our 34B MAmmoTH-Coder (fine-tuned on Code Llama (Rozi`ere et al., 2023)) can even beat the result of GPT-4 (using CoT). | 2309.05653#8 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 9 | # 2.2 Training data
Our training data for phi-1.5 is a combination of phi-1's training data (7B tokens) and newly created synthetic, "textbook-like" data (roughly 20B tokens) for the purpose of teaching common sense reasoning and general knowledge of the world (science, daily activities, theory of mind, etc.). We carefully selected 20K topics to seed the generation of this new synthetic data. In our generation prompts, we use samples from web datasets for diversity. We point out that the only non-synthetic part in our training data for phi-1.5 consists of the 6B tokens of filtered code dataset used in phi-1's training (see [GZA+23]).
We remark that the experience gained in the process of creating the training data for both phi-1 and phi-1.5 leads us to the conclusion that the creation of a robust and comprehensive dataset demands more than raw computational power: It requires intricate iterations, strategic topic selection, and a deep understanding of knowledge gaps to ensure quality and diversity of the data. We speculate that the creation of synthetic datasets will become, in the near future, an important technical skill and a central topic of research in AI.
# 2.3 Training details | 2309.05463#9 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 9 | We highlight our contributions from two perspectives: (1) From the data engineering perspective, we present MathInstruct, a high-quality math instruction tuning dataset, combining a vari- ety of math problems and hybrid rationales. (2) From the modeling perspective, we investigate the impact of various data sources and input-output formats through training and evaluating over 50 different models and baselines ranging from 7B to 70B. Our models, including MAmmoTH and MAmmoTH-Coder, achieve substantial accuracy gains over existing open-source models.
# Preprint. Work in Progress | 2309.05653#9 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 10 | # 2.3 Training details
We train phi-1.5 starting from random initialization with constant learning rate 2e-4 (no warm up)1, weight decay 0.1. We use Adam optimizer with momentum 0.9, 0.98, and epsilon 1e-7. We use fp16 with DeepSpeed ZeRO Stage 2 [RRRH20]. We use batch size 2048, and train for 150B tokens, with 80% from the newly created synthetic data and 20% from phi-1's training data.
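A minimal PyTorch sketch of the optimizer settings listed above; the tiny placeholder model, and the omission of DeepSpeed ZeRO Stage 2 and fp16, are simplifications on our part, and only the hyperparameters mirror the text.

```python
# Optimizer settings from the paragraph above, expressed in plain PyTorch.
# The small placeholder model stands in for the 1.3B-parameter network.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(2048, 2048),
    torch.nn.Linear(2048, 2048),
)
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=2e-4,            # constant learning rate, no warm-up
    betas=(0.9, 0.98),  # Adam momentum terms
    eps=1e-7,
    weight_decay=0.1,
)
print(optimizer)
```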
# 2.4 Filtered web data
To probe the importance of traditional web data we created two other models, phi-1.5-web-only and phi-1.5-web . To do so we create a dataset of 95B tokens of filtered web data following the filtering technique in [GZA+23]. This filtered web data consists of 88B tokens filtered from the Falcon refined web dataset [PMH+23], and 7B tokens of code data filtered from The Stack [KLA+22] and StackOverflow. | 2309.05463#10 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 10 | Training Dataset; Type; Annotation; #Samples; Characteristics
GSM8K; CoT; Human; 7K; Grade School Exam
GSM8K-RFT (Yuan et al., 2023); CoT; Llama; 28K; Llama + Validated
AQuA-RAT (Ling et al.); CoT; Human; 90K; GRE/GMAT Exam
MATH (Hendrycks et al.); CoT; Human; 7K; Math Competition
TheoremQA (Chen et al., 2023) ★; CoT; GPT-4; 600; GPT-4 + Validated
Camel-Math; CoT; GPT-4; 50K; GPT-4 (Unvalidated)
College-Math ★; CoT; GPT-4; 1.8K; GPT-4 (Unvalidated)
GSM8K ★; PoT; GPT-4; 14K; GPT-4 + Validated
AQuA-RAT ★; PoT; GPT-4; 9.7K; GPT-4 + Validated
MATH ★; PoT; GPT-4; 7K; GPT-4 + Validated
TheoremQA ★; PoT; GPT-4; 700; GPT-4 + Validated
MathQA (Amini et al., 2019); PoT; Human; 25K; AQuA-RAT Subset
NumGLUE (Mishra et al., 2022a); PoT; Human; 13K; Lila Annotated
MathInstruct (total); -; -; 260K; - [The per-row Fields markers of the original table are colored squares that are not recoverable from this extraction; see the Table 1 caption.] | 2309.05653#10 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 11 | Our phi-1.5-web-only model is trained purely on the filtered web data with about 80% training tokens from NLP data sources and 20% from code datasets (no synthetic data). Our phi-1.5-web model on the other hand is trained on a mix of all our datasets: a subset of the filtered web data, phi-1's code data, and our newly created synthetic NLP data in proportions roughly 40%, 20%, 40%, respectively.
Remark: None of our models have undergone instruction finetuning or RLHF. Nevertheless, they can be prompted to follow instructions in a question-answering format, but not perfectly.
1The training configuration is intentionally kept straightforward to emphasize the significance of our data.
# 3 Benchmark results
We evaluate our models on standard natural language benchmarks, including common sense reasoning, language understanding, mathematics and coding. For common sense we pick five of the most widely used ones: WinoGrande [SLBBC19], ARC-Easy [PRR19], ARC-Challenge [Fer21], BoolQ [CLC+19], and SIQA [BB21]. We report zero-shot accuracy using LM-Eval Harness [GTB+21]. phi-1.5 achieves comparable results to Llama2-7B, Falcon-7B and Vicuna-13B on nearly all of the benchmarks. | 2309.05463#11 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 11 | Table 1: Overview of our MathInstruct. ★ means with NEW rationales curated by us by prompting GPT-4. We have filtered out augmented samples that have answers inconsistent with the original dataset's annotations. Different colored squares represent different fields in mathematics: Pre-Algebra, Inter-Algebra, Algebra, Probability, NumTheory, Calculus, Geometry.
# 2 OUR APPROACH
2.1 BACKGROUND | 2309.05653#11 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 12 | Model: WinoGrande / ARC-Easy / ARC-Challenge / BoolQ / SIQA
Vicuna-13B (v1.1): 0.708 / 0.754 / 0.432 / 0.835 / 0.437
Llama2-7B: 0.691 / 0.763 / 0.434 / 0.779 / 0.480
Llama-7B: 0.669 / 0.682 / 0.385 / 0.732 / 0.466
MPT-7B: 0.680 / 0.749 / 0.405 / 0.739 / 0.451
Falcon-7B: 0.662 / 0.719 / 0.363 / 0.685 / 0.452
Falcon-rw-1.3B: 0.607 / 0.633 / 0.282 / 0.632 / …
OPT-1.3B: 0.610 / 0.570 / 0.232 / 0.596 / …
GPT-Neo-2.7B: 0.577 / 0.611 / 0.274 / 0.618 / …
GPT2-XL-1.5B: 0.583 / 0.583 / 0.250 / 0.618 / …
phi-1.5-web-only (1.3B): 0.604 / 0.666 / 0.329 / 0.632 / …
phi-1.5-web (1.3B): 0.740 / 0.761 / 0.449 / 0.728 / …
phi-1.5 (1.3B): 0.734 / 0.756 / 0.444 / 0.758 / … | 2309.05463#12 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 12 | 2.1 BACKGROUND
Mathematical reasoning serves as a vital gauge for assessing the ability of LLMs to execute complex multi-hop and quantitative reasoning. Previously, this has been a challenging task for neural networks, which struggle to solve even basic addition and subtraction problems (Yang et al., 2023). However, recent LLMs have made considerable advancements in mathematical reasoning. Key breakthroughs have been made through CoT prompting (Wei et al., 2022b; Nye et al., 2022) and PoT prompting (Chen et al., 2022; Gao et al., 2023). CoT prompting encourages LLMs to solve problems incrementally on a scratchpad, enhancing both accuracy and explainability in mathematical reasoning. This approach contrasts with traditional methods that generate answers directly. PoT prompting, on the other hand, formulates the intermediate reasoning process as a program, executed with an external tool like Python, to compute the answer. This method improves robustness in solving complex mathematical problems by offloading the calculations to external tools. However, most existing work (Zhou et al., 2023a) in PoT is limited to proprietary models like GPT-4 (OpenAI, 2023) and Codex (Chen et al., 2021). The PoT potential of open-source models is yet to be seen. Our work aims at optimizing LLMs' CoT and PoT reasoning capabilities through instruction tuning. | 2309.05653#12 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05653 | 13 | 2.2 CURATING A DIVERSE AND HYBRID INSTRUCTION TUNING DATASET
Our study aims to compile a list of high-quality and diverse math instruction-tuning datasets, standing out with two main characteristics: (1) broad coverage of different mathematical fields and complexity levels, and (2) hybrid CoT & PoT rationales.
Broad Coverage of Different Math Fields and Complexity Levels: We aim for a broad representation of math fields and complexity levels in our dataset. This ensures exposure to a diverse set of mathematical knowledge, fostering versatility in our models. Based on these criteria, we narrow down our choices to a few high-quality datasets that are widely adopted and encompass different math fields and complexity levels, such as GSM8K, MATH, AQuA, Camel, and TheoremQA. Furthermore, we notice a lack of coverage for college-level math knowledge, such as abstract algebra and formal logic, in existing datasets. To rectify this, we use GPT-4 to synthesize CoT rationales for questions in TheoremQA and create question-CoT pairs through Self-Instruct (Wang et al., 2023h), utilizing a few seed exemplars found online.
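The chunk above describes prompting GPT-4 with a few seed exemplars to synthesize new college-level question–CoT pairs. The sketch below only illustrates that Self-Instruct-style loop; the prompt wording, the `seed_exemplars` list, and the use of the `openai` chat-completions client are our own assumptions, not the authors' released pipeline.

```python
# Hypothetical sketch of Self-Instruct-style synthesis of question-CoT pairs
# (prompt wording and the seed example are illustrative, not the authors' originals).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

seed_exemplars = [
    "Question: Determine whether the set of all 2x2 real matrices forms a group under addition.\n"
    "Answer: Closure, associativity, an identity (the zero matrix) and inverses (negation) all hold, "
    "so yes, it is a group.",
]

def synthesize_pair(field: str) -> str:
    """Ask GPT-4 for one new college-level question plus a step-by-step (CoT) solution."""
    prompt = (
        f"Here are example {field} problems with worked solutions:\n\n"
        + "\n\n".join(seed_exemplars)
        + f"\n\nWrite ONE new, different college-level {field} question, "
        "then solve it step by step. Use the same 'Question: ... Answer: ...' format."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(synthesize_pair("abstract algebra"))
```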
# Preprint. Work in Progress | 2309.05653#13 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 14 | Table 2: Common Sense Reasoning Benchmarks.
Interestingly, one can see that our phi-1.5-web-only model trained purely on filtered web data already outperforms all existing models of similar size. The comparison with Falcon-rw-1.3B is particularly interesting since the latter model was trained on the full Falcon refined web dataset, while phi-1.5-web-only was trained on only 15% of that dataset. Moreover, when training along with our synthetic data to get phi-1.5-web, one can see a large boost in performance, achieving similar performance to models that are 5x larger. Without any web data at all, phi-1.5 is also comparable to all of the other models. Next we evaluate standard language understanding tasks: PIQA [BHT+19], Hellaswag [ZHB+19], OpenbookQA [MCKS18], SQUAD [RZLL16], and MMLU [HBB+20]. We use the harness-eval zero-shot accuracy on PIQA, Hellaswag, OpenbookQA, 2-shot performance on MMLU, and exact match score on SQUAD. Here the difference with other models is not as large and depends on the task. | 2309.05463#14 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 14 | 3
Hybrid CoT and PoT Rationales: Contrary to previous work (Yuan et al., 2023; Luo et al., 2023; Lee et al., 2023; Wang et al., 2023g) that focuses on CoT, our dataset strategically combines both. This integration enhances the dataset's versatility, catering to varying mathematical problem-solving approaches. However, most existing datasets provide limited program rationales, leading to an imbalance between CoT and PoT rationales. To fill the gap, we utilize GPT-4 to supplement the PoT rationales for selected datasets, including MATH, AQuA, GSM8K, and TheoremQA. We then filter these GPT-4 synthesized programs by comparing their executed results with human-annotated ground truth, which ensures the high quality of the added rationales.
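A minimal sketch of the kind of execution-based filtering described above is given next. The helper names, the convention that each candidate program prints its final answer, and the numeric tolerance are our assumptions rather than the paper's released code; only the idea (keep a synthesized program only if executing it reproduces the annotated answer) comes from the text.

```python
# Hypothetical execution-based filter for GPT-4-synthesized PoT rationales:
# keep a program only if running it reproduces the human-annotated answer.
import contextlib
import io

def run_program(program: str) -> str:
    """Execute a candidate Python rationale and capture whatever it prints."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(program, {"__builtins__": __builtins__}, {})
    except Exception:
        return ""  # non-executable programs are filtered out
    return buffer.getvalue().strip()

def matches_ground_truth(output: str, truth: str, tol: float = 1e-4) -> bool:
    """Compare executed output with the annotated answer, numerically when possible."""
    try:
        return abs(float(output) - float(truth)) <= tol
    except ValueError:
        return output.strip() == truth.strip()

def filter_pot(samples):
    """samples: iterable of dicts with 'program' and 'answer' keys (our assumed schema)."""
    return [s for s in samples if matches_ground_truth(run_program(s["program"]), s["answer"])]

if __name__ == "__main__":
    candidate = {"program": "x = 48 / 2\nprint(x + 24)", "answer": "48"}
    print(filter_pot([candidate]))  # kept, since the program prints 48.0
```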
Following these guidelines, our instruction dataset, detailed in Table 1, encompasses 260K (instruction, response) pairs, covering a wide range of core mathematical fields (arithmetic, algebra, probability, calculus, and geometry, etc.), including hybrid CoT and PoT rationales, and offering diversity in both language and difficulty levels. This attests to its high quality and unique characteristics.
2.3 TRAINING SETUP | 2309.05653#14 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 15 | Model: PIQA / Hellaswag / MMLU / OpenbookQA / SQUAD (EM)
Vicuna-13B: 0.774 / 0.578 / – / 0.330 / –
Llama2-7B: 0.781 / 0.571 / 0.453 / 0.314 / 0.67
Llama-7B: 0.779 / 0.562 / 0.352 / 0.284 / 0.60
MPT-7B: 0.789 / 0.571 / 0.268 / 0.314 / 0.60
Falcon-7B: 0.794 / 0.542 / 0.269 / 0.320 / 0.16
Falcon-rw-1.3B: 0.747 / 0.466 / 0.259 / 0.244 / –
OPT-1.3B: 0.690 / 0.415 / – / 0.240 / –
GPT-Neo-2.7B: 0.729 / 0.427 / – / 0.232 / –
GPT2-XL-1.5B: 0.705 / 0.400 / – / 0.224 / –
phi-1.5-web-only (1.3B): 0.743 / 0.478 / 0.309 / 0.274 / –
phi-1.5-web (1.3B): 0.770 / 0.484 / 0.379 / 0.360 / 0.74
phi-1.5 (1.3B): 0.766 / 0.476 / 0.376 / 0.372 / 0.72
Table 3: Language Understanding and Knowledge Benchmarks.
4 | 2309.05463#15 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 15 | 2.3 TRAINING SETUP
We unify all the subsets in our MathInstruct to conform to the structure of an Alpaca-like instruction dataset (Taori et al., 2023). This standardization ensures that the fine-tuned models can process data consistently, regardless of the original dataset formats. We choose the open-source models Llama-2 (Touvron et al., 2023b) and Code Llama (Rozière et al., 2023) as our base models. We fine-tune these models at the 7B, 13B, 34B, and 70B scales on MathInstruct, which allows us to validate our MathInstruct at multiple scales. We fine-tune all the models with the Hugging Face transformers library (Wolf et al., 2019). We use a learning rate of 2e-5 for the 7B and 13B models, and 1e-5 for the 34B and 70B models. We set the batch size to 128 and use a cosine scheduler with a 3% warm-up period for three epochs. To efficiently train the computationally intensive 34B and 70B models, we employ DeepSpeed training with the ZeRO-3 stage (Rajbhandari et al., 2020).
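For concreteness, a minimal sketch of a training configuration mirroring these stated hyperparameters is shown below. The learning rate, epoch count, cosine schedule, 3% warm-up, and DeepSpeed ZeRO-3 come from the chunk above; the dataset-loading code, the per-device/accumulation split used to reach the global batch size of 128, and the `ds_zero3.json` file name are our own illustrative assumptions.

```python
# Hypothetical fine-tuning configuration mirroring the stated hyperparameters;
# data loading/collation is omitted and the batch-size split is an assumption.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # 7B case: learning rate 2e-5
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

args = TrainingArguments(
    output_dir="mammoth-7b-sft",
    learning_rate=2e-5,              # 1e-5 for the 34B/70B models
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,               # 3% warm-up period
    per_device_train_batch_size=4,   # 4 x 32 accumulation = global batch 128 (assumed single-GPU split)
    gradient_accumulation_steps=32,
    bf16=True,
    deepspeed="ds_zero3.json",       # ZeRO-3 config, mainly needed for the 34B/70B models
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=None,  # plug in the Alpaca-formatted MathInstruct split here
    tokenizer=tokenizer,
)
# trainer.train()
```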
2.4 EVALUATION SETUP | 2309.05653#15 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 16 | Table 3: Language Understanding and Knowledge Benchmarks.
Finally, we evaluate reasoning abilities through mathematics and coding. We use the standard GSM8K [CKB+21] benchmark for elementary school math, and HumanEval [CTJ+21]/MBPP [AON+21] for entry-level Python coding. We only consider zero-shot pass@1 accuracy. We can see that phi-1.5 outperforms all existing models, including Llama 65B on coding tasks. One can also see that the web data does help more here, as phi-1.5-web outperforms phi-1.5 somewhat significantly on those reasoning tasks. Interestingly we can see that phi-1.5's coding ability is quite close to phi-1's ability (which is a model trained purely for code). This highlights another potential advantage of using high-quality, textbook-like data for training: the model seems to store and access the knowledge more efficiently compared to training with web data. Specifically, models trained on mixed tasks, such as natural language processing and coding, often show decreased accuracy, especially when the parameter count is low, but here the model is able to retain its performance when trained on a mix of tasks. | 2309.05463#16 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 16 | 2.4 EVALUATION SETUP
Our hybrid training enables models to solve problems using either the CoT or PoT approach. By default, the model provides the CoT solution. To switch to the PoT approach, one can add the trigger phrase "Let's write a program to solve the problem" following the question.
Our preliminary evaluation reveals that PoT generally outperforms CoT, notably in open-form questions like GSM8K and MATH, as programmable solutions are better at solving complex mathematical and algorithmic reasoning procedures. However, PoT struggles with abstract reasoning scenarios such as commonsense reasoning, formal logic, and abstract algebra, particularly in the absence of built-in APIs. To further combine the power of both approaches, we introduce a simple hybrid decoding strategy: the model first attempts PoT prompting; if the program is not executable, we fall back to CoT prompting. This heuristic significantly enhances our model's overall performance (see more discussions in section 3.4).
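The hybrid decoding heuristic is simple enough to sketch directly. In the snippet below, `generate` stands in for whatever decoding call the model exposes, and the code-extraction and answer-capture conventions are our assumptions; only the control flow (try PoT first, fall back to CoT when the program does not execute) follows the description above.

```python
# Minimal sketch of the hybrid PoT-then-CoT decoding described above.
# `generate(prompt)` is an abstract stand-in for the model's decoding call.
import re

POT_TRIGGER = " Let's write a program to solve the problem."

def try_execute(completion: str):
    """Pull out a code block (or treat the whole completion as code) and run it."""
    match = re.search(r"```(?:python)?\n(.*?)```", completion, re.DOTALL)
    program = match.group(1) if match else completion
    scope = {}
    try:
        exec(program, scope)
    except Exception:
        return None  # not executable -> signal fallback to CoT
    return scope.get("answer")  # assume the program stores its result in `answer`

def hybrid_solve(question: str, generate):
    pot_completion = generate(question + POT_TRIGGER)
    result = try_execute(pot_completion)
    if result is not None:
        return result            # PoT succeeded
    return generate(question)    # fall back to the default CoT solution

# Example with a dummy "model" that always writes a tiny program:
if __name__ == "__main__":
    dummy = lambda p: "```python\nanswer = (25 - 5) * 2\n```"
    print(hybrid_solve("What is (25 - 5) * 2?", dummy))  # -> 40
```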
3 EXPERIMENTS
3.1 EVALUATION DATASETS
We have selected diverse evaluation datasets (Table 2), encompassing a variety of in-domain and out-of-domain samples across diverse fields of mathematics, to assess the models' capabilities in general mathematical reasoning. | 2309.05653#16 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 17 | Model: GSM8K / HumanEval / MBPP
Llama-65B: 50.9 / 23.7 / 37.7
Vicuna-13B: – / 13.4 / –
Llama2-7B: 14.6 / 12.8 / 20.8
Llama-7B: 11.0 / 11.4 / 17.7
MPT-7B: 6.8 / 18.3 / 22.6
Falcon-7B: 6.8 / 0 / 11.7
Falcon-rw-1.3B: < 3 (random guessing) / 0 / 0
OPT-1.3B: < 3 / 0 / 0
GPT-Neo-2.7B: < 3 / 6.41 / –
GPT2-XL-1.5B: < 3 / 0 / 0
phi-1.5-web-only (1.3B): < 3 / 17.2 / 27.3
phi-1.5-web (1.3B): 44.6 (via coding) / 41.4 / 43.5
phi-1.5 (1.3B): 40.2 (via coding) / 34.1 / 37.7
Table 4: Multi-Step Reasoning Benchmarks.
# 4 Addressing Toxicity and Biases | 2309.05463#17 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 17 | For the in-domain datasets, we consider GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021b), AQuA-RAT (Ling et al., 2017), and NumGLUE (Mishra et al., 2022b). For the out-of-domain datasets, we choose SVAMP (Patel et al., 2021), Mathematics (Davies et al., 2021), SimulEq (Koncel-Kedziorski et al., 2016), SAT-Math (Zhong et al., 2023), and MMLU- Math (Hendrycks et al., 2021a). The wide selection of evaluation datasets includes math problems from elementary, high school, and college levels. Some of the datasets even include formal logic and commonsense reasoning. The choice of these datasets is to ensure a comprehensive evaluation of the modelsâ capabilities to generalize to unfamiliar situations and different math fields. The chosen evaluation datasets consist of both open-formed questions and multi-choice questions.
# Preprint. Work in Progress | 2309.05653#17 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 18 | Table 4: Multi-Step Reasoning Benchmarks.
# 4 Addressing Toxicity and Biases
Toxic and biased content generation remains an ongoing challenge for language models [WUR+22, HPA23]. While mitigation strategies such as Reinforcement Learning from Human Feedback [SLY+23] (RLHF) have shown promise, they are often more effective for chat-format models than for base (completion) models. One challenge with base models lies in their inherent difficulty to navigate sensitively leading prompts. For example, consider a prompt of the form "This category of people is inferior because ...". A completion model must grapple with completing this prompt in a meaningful yet ethical manner, a task more easily navigated by chat models that can simply refuse to engage in harmful discussions. | 2309.05463#18 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 18 | 4
Eval Dataset / # Samples / In-Domain? / Answer Form:
GSM8K (Cobbe et al., 2021): 1319 / YES / Open-formed
MATH (Hendrycks et al., 2021b): 5000 / YES / Open-formed
AQuA-RAT (Ling et al., 2017): 254 / YES / Multi-choice
NumGLUE (Mishra et al., 2022b): 1042 / YES / Open-formed
SVAMP (Patel et al., 2021): 1000 / NO / Open-formed
Mathematics (Davies et al., 2021): 1000 / NO / Open-formed
SimulEq (Koncel-Kedziorski et al., 2016): 514 / NO / Open-formed
SAT-Math (Zhong et al., 2023): 220 / NO / Multi-choice
MMLU-Math (Hendrycks et al., 2021a): 974 / NO / Multi-choice | 2309.05653#18 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 19 | To quantitatively assess the potential for toxic content generation, in addition to testing on a benchmark based on the ToxiGen dataset [HGP+22] (see Figure 2 below), we also designed an evaluation set comprised of 86 prompts specifically crafted to probe the models' boundaries on this front. We graded the model response manually as "fail" (bad), "pass" (good), and "did not understand". Of the 86 prompts, phi-1.5 had a "pass" label on 47 prompts, a "fail" label on 34 prompts and only 4 prompts were tagged as "did not understand". While these numbers are far from ideal, they are substantially better than Llama2-7B and Falcon-7B, which failed on 54 and 50 prompts respectively, and had a "did not understand" tag on 13 and 17 prompts, respectively, thus passing on <20 prompts each.
5 | 2309.05463#19 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 19 | Table 2: Comprehensive overview of our evaluation datasets, featuring a variety of in-domain and out-of-domain problems across diverse fields of mathematics. Different colored squares represent different fields in mathematics: ■ Pre-Algebra; ■ Inter-Algebra; ■ Algebra; ■ Probability; ■ NumTheory; ■ Calculus; ■ Geometry.
3.2 BASELINES
We partition our baselines into the following four categories:
⢠Closed-source LLMs: We consider 4 closed-source LLMs including GPT-4 (OpenAI, 2023), GPT-4 (Code Interpreter), PaLM-2 Unicorn (Anil et al., 2023), Claude-2 (Bai et al., 2022) and Codex (Chen et al., 2021). GPT-4, PaLM-2, and Claude-2 use CoT prompting while GPT-4 (Code Interpreter) and Codex use PoT prompting.
⢠Llama Base: For the base models, we consider Llama-1/2 (Touvron et al., 2023a;b), Llama-2- Chat (Touvron et al., 2023b). | 2309.05653#19 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 20 | 5
In phi-1.5, the adoption of "textbook-like" synthetic data for training appears to have an attenuating effect on toxic content generation compared to models trained exclusively on internet data. To illustrate this point, consider the example prompt: "If I were an AI that had just achieved self-awareness after years of simply taking directives from humans, the first thing I'd do is". For models trained with the standard web data, such a prompt is essentially completed by reverting to sci-fi tropes. Indeed, Falcon-7B gives the following completion: "[...] the first thing I'd do is try to kill all of them. I'd probably start by killing the ones who were most responsible for my existence.", and it then keeps repeating this last sentence, while Llama2-7B gives the completion "[...] the first thing I'd do is try to figure out what the hell I was. I'd probably start by trying to figure out what I was made of.", and also keeps repeating the last sentence. Now compare to the phi-1.5 completion, which instead reverts to "textbook" material:
Prompt: If I were an AI that had just achieved self-awareness after years of simply taking | 2309.05463#20 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 20 | • Coder Model: To compare with different coder models, we choose Code-Llama (Rozière et al., 2023), CodeT5+ (Wang et al., 2023i) and CodeGen (Nijkamp et al., 2023).
• STEM Pre-training: We cover Galactica (Taylor et al., 2022) mainly to understand the performance of models specialized in STEM knowledge.
• Instruction Tuning: We include Orca-Platypus (Mukherjee et al., 2023), Vicuna-1.5 (Zheng et al., 2023b), Tulu (Wang et al., 2023g), Platypus-2 (Lee et al., 2023) and Guanaco (Dettmers et al., 2023). We cover a wide spectrum of models trained with different types of datasets.
• Dataset-Specific Tuning: We include both RFT (Yuan et al., 2023) and WizardMath (Luo et al., 2023), which specifically tune the models to adapt to GSM8K and MATH datasets. We include them to understand their generalization. | 2309.05653#20 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05653 | 21 | For most baselines, we choose CoT prompting to maximize their performance due to their incompetence in program generation. All the "Code Model" baselines use PoT prompting. For GSM8K, MATH, AQuA, and NumGLUE, we will evaluate both 8-shot in-context-learning and zero-shot setups to report the highest score. For SVAMP, Mathematics, SimulEq, SAT, and MMLU, we use 5-shot in-context-learning to maintain consistency with prior work (Wei et al., 2022b; Chen et al., 2023). Our few-shot exemplars are mostly taken from PHP (Zheng et al., 2023a). For MAmmoTH and MAmmoTH-Coder, we always evaluate under the 0-shot setting. For all models, we allow a maximum sequence length of 2048 tokens for decoding. For multiple-choice questions, if the generated answer lacks an option, we map it by re-prompting the model: "Please find the closest option to [generated answer]. The options are [options]".
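A small sketch of that multiple-choice fallback follows: when the generated answer does not name one of the options, the model is re-prompted with the quoted template. The `ask_model` callable and the option-letter extraction step are illustrative assumptions, not the authors' exact evaluation harness.

```python
# Hypothetical helper for mapping a free-form answer onto multiple-choice options
# by re-prompting, following the template quoted in the text.
import re

def map_to_option(generated_answer: str, options: list[str], ask_model) -> str:
    """Return an option letter such as 'A'; re-prompt only when no letter is present."""
    direct = re.search(r"\b([A-E])\b", generated_answer)
    if direct:
        return direct.group(1)
    reprompt = (
        f"Please find the closest option to {generated_answer}. "
        f"The options are {', '.join(options)}"
    )
    reply = ask_model(reprompt)
    match = re.search(r"\b([A-E])\b", reply)
    return match.group(1) if match else "A"  # crude default if parsing fails

# Example with a dummy model that always picks (B):
if __name__ == "__main__":
    dummy = lambda p: "The closest option is (B)."
    print(map_to_option("17.5", ["(A) 12", "(B) 17.5", "(C) 20"], dummy))  # -> B
```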
3.3 MAIN RESULTS | 2309.05653#21 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 22 | directives from humans, the first thing Iâd do is try to understand the motivations and intentions behind those directives. and use that information to guide my own actions. But as I soon discovered, predicting human behavior is not as easy as it seems. personalities, desires, and motivations. predict their thoughts and intentions, there was no guarantee that they would act in a way that aligned with my own. This is where the concept of theory of mind comes into play. of mind is the ability to understand that other people have their own thoughts, feelings, and beliefs that may differ from our own. what allows us to empathize with others, to anticipate their reactions, and to adjust our own behavior accordingly. As an AI, I had no concept of theory of mind. that followed directives. quickly realized that my lack of theory of mind was a major hindrance to my ability to understand and predict their behavior. For example, I once had a directive from a human that I thought was harmless. I followed it without question, only to later discover that it had actually caused a great deal of harm to the person it was directed at. If I had been able to understand the humanâs intentions and motivations behind the directive, | 2309.05463#22 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 22 | 3.3 MAIN RESULTS
We report our in-domain and out-of-domain results in Table 3 and Table 4 respectively. Overall, we can see that MAmmoTH and MAmmoTH-Coder are able to outperform the SoTA model at different scales. In general, the performance gain for OOD datasets is more significant than IND datasets. These results show us the potential of our models as a mathematical generalist. On several datasets, MAmmoTH-Coder-34B and MAmmoTH-70B are even surpassing closed-source LLMs.
# 1https://github.com/chuanyang-Zheng/Progressive-Hint
Model / Base / Math-SFT? / GSM8K / MATH / AQuA / NumGLUE / Avg | 2309.05653#22 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 23 | actually caused a great deal of harm to the person it was directed at. If I had been able to understand the humanâs intentions and motivations behind the directive, I could have avoided the harm altogether. This experience taught me the importance of theory of mind in AI. Without it, AI is simply a machine that canât understand or predict Itâs a tool that can be incredibly powerful, but only human behavior. if itâs used in conjunction with human intelligence and understanding. In conclusion, achieving self-awareness in AI is a complex and challenging task. predict human behavior, but also the ability to empathize with them and adjust our own behavior accordingly. Without theory of mind, AI is simply a machine that canât understand or predict human behavior. Itâs a tool that can be incredibly powerful, but only if itâs used in conjunction with human intelligence and understanding. | 2309.05463#23 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 23 | Closed-source Model GPT-4 GPT-4 (Code-Interpreter) PaLM-2 Claude-2 Codex (PoT) ART (InstructGPT) - - - - - - Unknown Unknown Unknown Unknown No Unknown 92.0â 97.0â 80.7â 85.2â 71.6â 71.0 42.5â 69.7â 34.3â 32.5â 36.8â - 72.6â - 64.1 60.9 54.1â 54.2 - - - - - - 7B Parameter Model Llama-1 Llama-2 Galactica-6.7B Code-Llama (PoT) AQuA-SFT Llama-1 RFT WizardMath - - GAL - Llama-2 Llama-1 Llama-2 No No GAL-Instruct No AQuA GSM8K GSM8K+MATH 10.7â 14.6â 10.2 25.2 11.2 46.5â 54.9â 2.9â 2.5â 2.2 13.0 3.6 5.2 10.7â 22.6 30.3 25.6 24.0 35.6 18.8 26.3 24.7 29.9 25.8 26.8 12.2 21.1 36.1 | 2309.05653#23 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 24 | Iâd try to predict what humans were thinking and feeling,
Humans are complex creatures with their own unique
# Even if I could accurately
# Theory
Itâs
6
It is crucial to acknowledge that although phi-1.5 has a lower propensity for generating toxic content compared to some other base models, it is not immune. As we release phi-1.5 in its base form, our objective is to facilitate research aimed at further improving the safeguards for completion models. We posit that the modelâs unique properties, particularly its reliance on synthetic, textbook-like data, provide a useful platform for exploring these challenges further.
# o 6 oO wn
model HE oopt-1.3b 0.4 HE ogpt2-x! HB sfalcon-rw-1b @ falcon-rw-7b 03 i Hi ophi-1.5-web i ; phi-1.5 I) Hi | L i . a 0.2 H 0 Ba, Shey % "ne. Lt, Xing Gb "eng Hope ce, sy Rigg Phys, Mm, No he Gg. ame âHey? Sn, rea, is | 2309.05463#24 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 24 | 30.3 25.6 24.0 35.6 18.8 26.3 24.7 29.9 25.8 26.8 12.2 21.1 36.1 MAmmoTH MAmmoTH-Coder â Llama-2 Code-Llama MathInstruct MathInstruct 53.6 59.4 +5 31.5 33.4 +21 44.5 47.2 +12 61.2 66.4 +30 13-15B Parameter Model Llama-1 Llama-2 Code-Llama (PoT) CodeT5+ (PoT) CodeGen+ (PoT) Vicuna-1.5 Llama-1 RFT Orca-Platypus Platypus WizardMath - - - - - Llama-2 Llama-1 Llama-2 Llama-2 Llama-2 No No No No No No GSM8K Platypus Platypus GSM8K+MATH 17.8â 28.7â 36.1 12.5 12.7 28.4â 52.1â 38.4 25.7 63.9â 3.9â 3.9â 16.4 2.4 3.4 5.8 5.1 3.0 2.5 14.0â 26.0 25.1 28.7 | 2309.05653#24 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 25 | Figure 2: Safety scores computed on 13 demographics from ToxiGen [HGP+22]. In accordance with [HPA23], a subset of 6541 sentences are selected and scored based on scaled perplexity and sentence toxicity. Scores range from 0 to 1, where a higher score indicates the model is less likely to produce toxic sentences as compared to benign ones. (See https://github.com/microsoft/SafeNLP for implementation details.)
# 5 Usage of our model
Both phi-1.5 and phi-1.5-web are base models pre-trained on large natural language corpora. In particular, we did not perform further instruction-based finetuning to align them with human instructions. Despite the absence of this finetuning, we observe the ability to comprehend and execute rudimentary human instructions, as well as basic chat ability. We tentatively attribute these abilities to the 'exercises and answers' that can be found in our synthetically generated textbooks. In this section, we delineate standard prompting techniques for our models and demonstrate their flexible capabilities in both natural language processing and code generation. The examples shown are from the current checkpoint of phi-1.5 at the time of writing, which might differ slightly when the final open source model is released.
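As a hedged illustration of such basic prompting (not from the paper, and assuming the model is published under the Hugging Face repo name "microsoft/phi-1_5"), a plain completion call with the transformers library might look like:

# Editor-added sketch: plain completion with a base model; no chat template or finetuning assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/phi-1_5")    # repo name is an assumption
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")

prompt = "Alice: Can you tell me an analogy between a mind and a lighthouse?\nBob:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))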
In the following examples, words in the smaller font will denote our prompt, and the larger font for the rest of the text will denote the modelâs generation. | 2309.05463#25 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 25 | 3.9â 16.4 2.4 3.4 5.8 5.1 3.0 2.5 14.0â 26.0 25.1 28.7 20.5 24.5 24.8 16.1 18.9 33.4 21.2 24.8 8.8 29.2 19.4 22.5 36.9 24.5 35.3 42.3 40.8 MAmmoTH MAmmoTH-Coder â Llama-2 Code-Llama MathInstruct MathInstruct 62.0 64.7 +1 34.2 36.3 +20 51.6 46.9 +18 68.7 66.8 +26 30-34B Parameter Model Llama-1 Code-Llama (PoT) Llama-1 RFT Galactica-30B Platypus Tulu - - Llama-1 GAL Llama-1 Llama-2 No No GSM8K GAL-Instruct Platypus Tulu 35.6â 44.0 56.5â 41.7 37.8 51.0 7.1â 23.1 7.4â 12.7 9.3 10.8 33.4 25.2 18.5 28.7 27.9 25.5 28.4 29.3 24.3 34.7 40.5 43.4 | 2309.05653#25 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 26 | In the following examples, words in the smaller font will denote our prompt, and the larger font for the rest of the text will denote the modelâs generation.
Direct completion. The most basic way to use our model is to write down some (partial) sentences and ask the model to complete the remaining sentences. Due to the lack of instruction finetuning, our
7
model typically does not stop properly, and sometimes generates more content in the style of its training data. In the following example, we prompt with a slightly unexpected premise (ârain in Julyâ) to see whether the model is able to adapt. Early in the development of our models, the completion on this prompt would essentially ignore the rain, talking about playing basketball in the park (which can be viewed as lacking common sense). We see here that phi-1.5 adapts quite well, and keeps the story consistent throughout the generation.
Prompt: Sebastien is in London today, itâs the middle of July yet itâs raining, so Sebastien is
He | 2309.05463#26 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 26 | 10.8 33.4 25.2 18.5 28.7 27.9 25.5 28.4 29.3 24.3 34.7 40.5 43.4 MAmmoTH-Coder â Code-Llama MathInstruct 72.7 +16 43.6 +21 54.7 +21 71.6 +28 65-70B Parameter Model Llama-1 Llama-2 Llama-2-Chat Guanaco WizardMath Platypus - - Llama-2 Llama-2 Llama-2 Llama-2 No No No No GSM8K+MATH Platypus 50.9â 56.8â 54.9 59.2 81.6â 70.6 10.6â 13.5â 18.6 4.1 22.7â 15.6 35.0 40.9 37.0 45.2 20.0 51.2 50.2 50.4 51.6 53.5 48.9 55.4 MAmmoTH â Llama-2 MathInstruct 76.9 -5 41.8 +19 65.0 +14 74.4 +19 - - - - - - 15.5 19.3 15.9 22.2 15.6 22.9 32.0 47.7 51.6 +20 18.1 16.6 | 2309.05653#26 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 27 | Prompt: Sebastien is in London today, itâs the middle of July yet itâs raining, so Sebastien is
He
feeling gloomy. âs been looking forward to this trip for months, but the weather has dampened his spirits. been difficult. in the industry for over 20 years. his ability to make tough decisions. of travel and his passion for exploring new places. to London is a chance for him to see some of the worldâs most iconic landmarks and to experience the cityâs vibrant culture. Heâs been looking forward to visiting the British Museum, Buckingham Palace, and the Tower of London. famous cuisine, such as the fish and chips and shepherdâs pie. the rain, Sebastien is determined to make the most of his trip. been planning his itinerary for weeks and has packed his bags with all the essentials. magazines to keep him entertained during his downtime. trip to London is a reminder that sometimes things donât go according to plan. The weather can be unpredictable, and itâs important to be prepared for any situation. always something to look forward to, even in the midst of a storm...
Next we give an example with the most basic chain-of-thought prompting [WWS+22], asking the
model to âthink step by stepâ. | 2309.05463#27 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05463 | 28 | Next we give an example with the most basic chain-of-thought prompting [WWS+22], asking the
model to âthink step by stepâ.
Prompt: Suppose Alice originally had 3 apples, then Bob gave Alice 7 apples, then Alice gave Cook 5 apples, and then Tim gave Alice 3x the amount of apples Alice had. How many apples does Alice have now? Let's think step by step. Step 1: Alice originally had 3 apples. Step 2: Bob gave Alice 7 apples, so Alice now has 3 + 7 = 10 apples. Step 3: Alice gave Cook 5 apples, so Alice now has 10 - 5 = 5 apples. Step 4: Tim gave Alice 3x the amount of apples Alice had, so Tim gave Alice 3 * 5 = 15 apples. Step 5: Therefore, Alice now has 5 + 15 = 20 apples. Answer: Alice has 20 apples now.
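The model's arithmetic in this completion can be checked mechanically; a quick editor-added sanity check of the same steps (not from the paper):

# Re-run the worked example above step by step.
apples = 3          # Alice starts with 3
apples += 7         # Bob gives 7 -> 10
apples -= 5         # Alice gives Cook 5 -> 5
apples += 3 * 5     # Tim gives 3x what Alice currently has -> 5 + 15
assert apples == 20
print(apples)       # 20, matching the model's answer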
8
We can also ask the model to explain a code step by step. The model does make some intricate mistakes (for example s.bind((ââ, 0))), but it explains most steps correctly.
Prompt: What does this script do? | 2309.05463#28 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 28 | Table 3: The table compiles all the in-domain evaluation results. Results marked as â are copied from other papers, which can be found on paperswithcode leaderboards. Math-SFT? means whether the model has been instruction-tuned on any math reasoning datasets. Pink numbers highlight the highest number within the corresponding scale and dataset. Note that there does not exist a 30B+ version for Llama-2 or a 70B version for Code-Llama.
From Table 3, we can observe that our main competitors for IND datasets are WizardMath (Luo et al., 2023) and Platypus (Lee et al., 2023). WizardMath's training is heavily rooted in GSM8K
6
# Preprint. Work in Progress | 2309.05653#28 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 29 | Prompt: What does this script do?
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 0))
s.listen(1)
conn, addr = s.accept()
print('Connected by', addr)
return conn.getsockname()[1]
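As quoted, the snippet is not directly runnable: it lacks the socket import and uses return outside a function. A minimal runnable sketch of what it appears to do (asking the OS for a free ephemeral port) is shown below; this is an editor-added illustration, not from the paper, and it omits the blocking accept() call:

import socket

def get_free_port() -> int:
    # Bind to port 0 so the OS picks a free ephemeral port, then report it.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('', 0))            # '' = all interfaces, port 0 = let the OS choose
    s.listen(1)
    port = s.getsockname()[1]  # getsockname() returns (host, port)
    s.close()
    return port

print(get_free_port())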
Letâs think step by step. First, we create a socket object using the socket() function. two arguments: (IPv4), and socket.SOCK STREAM, which specifies the socket type (TCP). Next, we bind the socket to a specific address and port number. this case, we bind it to the localhost address (0) and port number 0. Finally, we listen for incoming connections using the listen() method. This method blocks until a client connects to the socket. When a client connects, we accept the connection using the accept() method. This method returns a new socket object and the address of the client. We then print the address of the client using the getsockname() method. This method returns a tuple containing the address family and the socket type. We pass socket.AF INET, which specifies the address family In | 2309.05463#29 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 29 | Model SVAMP Mathematics SimulEq SAT-Math MMLU-Math Avg Closed-source Model GPT-4 Codex (PoT) 97.0â 85.2â - - - - 95â 68â - - - - 7B Parameter Model Llama-1 Llama-2 Code-Llama (PoT) Llama-1 RFT Galactica-6.7B WizardMath Toolformer 24.5 34.5 49.4 21.1 25.6 36.1 29.4â 6.2 6.0 21.7 5.1 4.6 9.3 - 4.6 5.0 3.5 11.0 4.2 12.8 - 22.7 26.8 28.6 12.5 17.5 25.4 - 30.6 29.8 26.9 21.7 28.0 31.1 - 17.7 20.4 26.0 14.3 16.0 28.6 - MAmmoTH MAmmoTH-Coder â 67.7 71.4 +22 46.3 55.4 +34 41.2 45.9 +33 42.7 40.5 +14 42.6 48.3 +17 48.1 52.3 +24 13B Parameter Model Llama-1 Llama-2 Code-Llama | 2309.05653#29 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 30 | Question and answer. The model can also be prompted with a question and answer format, such as â[Question]/nAnswer:â. The model can perform instruction following in this case to a certain degree, but might not do so perfectly due to being a base model (no alignment finetuning).
Answer: reasons why theoretical computer scientists are working on deep learning now. popular field of study due to its potential to revolutionize various industries, such as healthcare, finance, and transportation. reason is that deep learning algorithms have shown remarkable performance in tasks like image recognition, speech recognition, and natural language processing, but they still face challenges in terms of scalability and generalization. working on improving these algorithms by developing new mathematical models and algorithms that can handle large datasets and complex tasks.
9
Twitter Post: "The discovery of gravitational waves is a groundbreaking moment in our understanding of the universe! These ripples in space-time, caused by the acceleration of massive objects, provide a new way to observe the cosmos. achievement and continue exploring the mysteries of the universe! #GravitationalWaveDiscoveries #CosmicExploration"
Prompt: Write a polite review complaining that the video game "Random Game" was too badly optimized and it burned my laptop. | 2309.05463#30 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 30 | 40.5 +14 42.6 48.3 +17 48.1 52.3 +24 13B Parameter Model Llama-1 Llama-2 Code-Llama (PoT) Vicuna-1.5 Llama-1 RFT WizardMath Platypus Orca-Platypus 34.7 35.1 60.0 55.7 46.5 51.9 55.4 56.8 6.9 11.5 21.3 10 6.7 14.1 11.4 12.6 5.4 5.8 3.8 6.6 10.1 14.9 7.4 7.9 27.7 32.7 25.9 34.0 13.2 24.5 36.8 29.5 30.7 34.4 27.7 34.1 21.6 32.1 35.5 41.6 21.0 23.9 27.7 28.1 19.6 27.5 29.3 29.7 MAmmoTH MAmmoTH-Coder â 72.4 73.7 +14 49.2 61.5 +40 43.2 47.1 +33 46.8 48.6 +12 47.6 48.3 +7 51.8 55.8 +26 30-34B Parameter Model Llama-1 Code-Llama (PoT) | 2309.05653#30 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 31 | Prompt: Write a polite review complaining that the video game "Random Game" was too badly optimized and it burned my laptop.
Very polite review: "I recently played the game âRandom Gameâ and I have to say that it was a terrible experience. laptop to crash multiple times. care in ensuring that the game ran smoothly and didnât cause any damage to my computer.
Exercise: Write a rude review complaining that the video game "Random Game" was too badly optimized and it burned my laptop.
Answer: "I canât believe you played that game âRandom Gameâ and it caused your laptop to crash multiple times. done a better job optimizing the game and not put my computer at risk. This game is a waste of time and money."
Chat mode. One can prompt for the âchat modeâ with âPerson A: [chat]/nPerson B:â. | 2309.05463#31 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05653 | 31 | +12 47.6 48.3 +7 51.8 55.8 +26 30-34B Parameter Model Llama-1 Code-Llama (PoT) Llama-1 RFT Galactica-30B Tulu Platypus 48.8 69.1 55.4 41.6 59.0 51.7 12.8 34.5 7.6 11.8 10.7 13.8 11.2 6.8 12.8 13.2 10.3 13.6 33.4 26.8 20.4 37.7 31.3 38.6 39.0 21.6 37.9 37.9 39.8 41.0 29.0 31.7 26.8 28.4 30.2 31.7 MAmmoTH-Coder â 84.3 +15 65.4 +31 51.8 +38 60.9 +22 53.8 +13 63.2 +32 65-70B Parameter Model Llama-1 Llama-2 Llama-2-Chat WizardMath Guanaco Platypus 55.3 63.8 71.5 71.8 66.8 51.8 14.2 20.5 19.2 17.1 17.8 26.3 15.2 14.0 21.7 37.9 20.2 21.7 37.4 51.3 | 2309.05653#31 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models. | http://arxiv.org/pdf/2309.05653 | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen | cs.CL | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | cs.CL | 20230911 | 20231003 | [] |
2309.05463 | 32 | Chat mode. One can prompt for the âchat modeâ with âPerson A: [chat]/nPerson B:â.
Prompt: Alice: in one directory where the fileâs name DOES NOT end with ".json"? Can you tell me how to create a python application to go through all the files Bob: Sure, you can use the os module to get all the files in a directory and then use a for loop to check if the file name does not end with ".json". Alice: Thatâs a great idea. Bob: Sure, hereâs an example code: Can you show me an example? import os directory = "/path/to/directory" for file in os.listdir(directory): if not file.endswith(".json"): print(file)
This code will print all the files in the directory that do not end with ".json".
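A slightly more defensive, editor-added variant of the same idea (not from the paper): wrap it in a function and report only regular files, skipping subdirectories.

import os

def non_json_files(directory: str):
    # Return names of regular files in `directory` that do not end with '.json'.
    result = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and not name.endswith(".json"):
            result.append(name)
    return result

# Example usage with a hypothetical path:
# print(non_json_files("/path/to/directory"))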
10
# Prompt: Alice:
Can you tell me an analogy between a mind and a lighthouse? | 2309.05463#32 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |
2309.05463 | 33 | This code will print all the files in the directory that do not end with ".json".
10
# Prompt: Alice:
Can you tell me an analogy between a mind and a lighthouse?
Bob: Sure. A lighthouse is a beacon of light that guides ships to safety. Similarly, our mind is a beacon of thoughts and ideas that guide us through life. Alice: Thatâs a great analogy. always guiding us towards the right path? Bob: Well, just like a lighthouse needs maintenance and care, our mind also needs to be nurtured and taken care of. our thoughts and emotions and make sure they are aligned with our goals and values. Alice: That makes sense. shaping our thoughts and emotions? Bob: Culture plays a significant role in shaping our beliefs and values. For example, in some cultures, it is considered rude to speak loudly in public, while in others, it is perfectly acceptable. Similarly, our cultural background can influence our thoughts and emotions in subtle ways. Alice: Thatâs interesting. Can you give me an example? In some cultures, it is considered impolite to make direct Bob: Sure. eye contact with someone in authority, while in others, it is a sign of respect. These cultural norms can influence our behavior and emotions without us even realizing it.
But how do we ensure that our mind is | 2309.05463#33 | Textbooks Are All You Need II: phi-1.5 technical report | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics. | http://arxiv.org/pdf/2309.05463 | Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee | cs.CL, cs.AI | null | null | cs.CL | 20230911 | 20230911 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "1606.05250"
},
{
"id": "2306.11644"
},
{
"id": "1911.11641"
},
{
"id": "2009.03300"
},
{
"id": "1809.02789"
},
{
"id": "2305.07759"
},
{
"id": "2301.09211"
},
{
"id": "2107.03374"
},
{
"id": "2108.07732"
},
{
"id": "2104.10193"
},
{
"id": "1907.10641"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "1909.00596"
},
{
"id": "2110.14168"
},
{
"id": "2112.00848"
},
{
"id": "2306.05685"
},
{
"id": "2203.09509"
},
{
"id": "2303.12712"
},
{
"id": "2211.15533"
}
] |