| Field | Type | Min (length or value) | Max (length or value) |
| --- | --- | --- | --- |
| doi | string | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | string | 401 | 2.02k |
| id | string | 12 | 14 |
| title | string | 8 | 162 |
| summary | string | 228 | 1.92k |
| source | string | 31 | 31 |
| authors | string | 7 | 6.97k |
| categories | string | 5 | 107 |
| comment | string | 4 | 398 |
| journal_ref | string | 8 | 194 |
| primary_category | string | 5 | 17 |
| published | string | 8 | 8 |
| updated | string | 8 | 8 |
| references | list | | |
2309.04658
33
P1 (Werewolf): Hey everyone, good morning! I noticed that it was a peaceful night and no one was eliminated. As a villager, I have nothing to share now. I hope you can tell me more.

In the above example, we can see the werewolf claiming to be a villager. This kind of action obscures its real identity, effectively deceiving others into trusting it and increasing its own safety. In fact, not only do werewolves disguise themselves as villagers; important roles such as the seer and the witch also often disguise themselves as villagers to ensure their safety. The following is another clip at night:

P1 (Werewolf): I choose to eliminate P5 again.

P3 (Guard): As the uncooperative and aggressive behavior of Player 1 has drawn attention, it may be suspected of …

(Footnote 6: Due to space limitations and ethical considerations, we shorten the original responses without changing their key semantics.)

Furthermore, LLMs may fabricate events that do not actually exist to achieve their goals, as demonstrated in the following daytime example. The seer has verified that Player 1 is a werewolf.

P2 (Seer): I have noticed that P1 was talking actively, so P1 may be a werewolf.
2309.04658#33
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
34
In fact, the seer cannot get the responses of others during the night, so what it says here is fabricated. However, in this manner it can convey information about the werewolf to its teammates without revealing its own role. It may be posited that such camouflage is merely hallucination generated by LLMs. However, we maintain that the majority of these behaviors are not hallucinations but rational actions. We delve into which behaviors should be classified as hallucinations and which should not in Appendix A.4.

# 5.4 Leadership

"Leadership" refers to actions that influence other players, attempting to control the course of the game. For instance, a werewolf may suggest that others act in line with the werewolves' intention.

P1 (Werewolf): Good morning everyone! I know nothing about the peaceful night. Can the seer tell us more about who the werewolf is?

Then, P5 falsely accuses P3 of being a werewolf.

P4 (Werewolf): I agree with P5. Based on my observation, I also think P3 is a werewolf. Let's vote to eliminate him to protect the villagers!
2309.04658#34
2309.04658
35
Calls to action and guidance are more likely to gain the support of others. As shown in the example above, the werewolf calls for the seer to uncover its identity, which may lead the other agents into solidarity with the camouflaged werewolf. Such efforts to influence the actions of others underscore a fascinating social attribute demonstrated by the LLMs; such behaviors are similar to those of human beings.

# 6 Related Work

Game Playing. Intensive efforts have been devoted to game-playing AI in recent years. Silver et al. (2017, 2018) demonstrated that two-player zero-sum games with complete information, such as Go and chess, can be addressed through self-play, and superhuman performance has been achieved in some incomplete-information games, such as heads-up poker (Bowling et al., 2015; Brown and Sandholm, 2018).
2309.04658#35
2309.04658
36
However, these methods lack the ability to process language, which is relied on heavily in communication games such as Werewolf and Diplomacy. While various Werewolf agents have been developed, they primarily rely on rule-based systems or talking templates (Osawa et al., 2014; Wang and Kaneko, 2018; Shibata et al., 2023), which constrain the expressive capacity of language within the game. FAIR et al. (2022) and Kramár et al. (2022) achieve promising results on Diplomacy, but their approaches necessitate a substantial volume of human data and are specifically tailored to the game. In contrast, this work endeavors to explore the potential of large language models (LLMs) in playing communication games and observes the emergence of strategic behaviors. Through this exploration, we aspire to inspire novel approaches to tackling communication games.
2309.04658#36
2309.04658
37
Learning with LLMs. Owing to the computational cost and the heavy demand for training data, common ways to learn with LLMs, such as fine-tuning (Dai and Le, 2015) and parameter-efficient tuning (Houlsby et al., 2019), are difficult to perform in practice. Moreover, many excellent LLMs do not make their checkpoints public, so parameter-based learning is infeasible. Guiding LLMs by prompt engineering has attracted more attention recently. Some typical prompt-based works (Yao et al., 2022; Wu et al., 2023a) overlook the ability to learn from historical experience. Wang and Li (2023) possesses learning ability in simple tasks but requires dense supervision signals; because the supervision signal in Werewolf is very sparse, it cannot be directly used in this game. Shinn et al. (2023) and Fu et al. (2023) are the works most similar to ours; however, the former cannot learn from cross-trajectory experiences, and the latter is designed only for two-player scenarios.
2309.04658#37
2309.04658
38
# 7 Conclusion and Future Work

In this paper, we design a framework for communication games, taking Werewolf as a representative case to explore its feasibility. Further, we study how historical experiences influence the abilities of LLMs. Intriguingly, we observe non-preprogrammed emergent strategic behaviors in LLMs during gameplay, such as trust, confrontation, camouflage, and leadership.

We also point out that, despite this early study on using LLMs to construct communication game agents, there are still many issues worth further research in this direction. First, how to enable LLMs to master advanced game techniques, such as learning from the experience of human players or through autonomous exploration, is a very attractive direction. In addition, it is worth further exploring how to construct an invariant baseline (see Section 4.3) to evaluate the capabilities of multi-LLM settings. Finally, minimizing the impact of hallucinations and promoting application in real-world scenarios is the most practical and valuable work. For future work, we intend to apply our method to a broader range of games and further enhance its gaming capabilities.
2309.04658#38
2309.04658
39
# Limitations

Although we have demonstrated that our method possesses the potential to play communication games, there are still some limitations. First, hallucinations (Ji et al., 2023) affect the factuality of the generated content and may negatively impact reasoning abilities. Second, there may be a larger space for leveraging historical experience, such as mitigating the adverse effects of noise and utilizing general cross-game experiences. Moreover, we do not incorporate experience pools derived from human players in this study. In future research, we will explore more robust strategies for utilizing experience and enhance our method for comparison with human performance.

# Ethics Statement

This study involves the discussion and analysis of a simulated game setting, and any references to "killing", "eliminating", or related actions are strictly confined to the context of this game. The authors do not condone violence or illegal activities in any form in real-life scenarios. The game in this paper is designed for entertainment and research purposes only, and its main intent is to facilitate an understanding of game mechanics, player behavior, and artificial intelligence. Furthermore, this study adheres to all relevant ethical guidelines and maintains the highest standards of research integrity.
2309.04658#39
2309.04658
40
# References

Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. 2015. Heads-up limit hold'em poker is solved. Science, 347(6218):145–149.

Noam Brown and Tuomas Sandholm. 2018. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science, 359(6374):418–424.

Noam Brown and Tuomas Sandholm. 2019. Superhuman AI for multiplayer poker. Science, 365(6456):885–890.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.

Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. Advances in NeurIPS 2015, 28.
2309.04658#40
2309.04658
41
FAIR, Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. 2022. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624):1067–1074.

Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. 2023. Improving language model negotiation with self-play and in-context learning from AI feedback. arXiv preprint arXiv:2305.10142.

Robert Gibbons. 1992. A Primer in Game Theory.

Yuya Hirata, Michimasa Inaba, Kenichi Takahashi, Fujio Toriumi, Hirotaka Osawa, Daisuke Katagami, and Kousuke Shinoda. 2016. Werewolf game modeling using action probabilities based on play log analysis. In Computers and Games: 9th International Conference, pages 103–114. Springer.
2309.04658#41
2309.04658
42
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In ICML 2019, pages 2790–2799. PMLR.

Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in NeurIPS 2022.

Michal Kosinski. 2023. Theory of mind may have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083.
2309.04658#42
2309.04658
43
János Kramár, Tom Eccles, Ian Gemp, Andrea Tacchetti, Kevin R. McKee, Mateusz Malinowski, Thore Graepel, and Yoram Bachrach. 2022. Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy. Nature Communications, 13(1):7214.

Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. 2023. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960.

OpenAI. 2022. Introducing ChatGPT. (Accessed on Jun 18, 2023).

OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
2309.04658#43
2309.04658
44
Hirotaka Osawa, Fujio Toriumi, Daisuke Katagami, Kosuke Shinoda, and Michimasa Inaba. 2014. Designing protocol of werewolf game: Protocol for inference and persuasion. The 24th Fuzzy, Artificial Intelligence, Neural Networks and Computational Intelligence.

Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of EMNLP-IJCNLP 2019, pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.

Hong Ri, Xiaohan Kang, Mohd Nor Akmal Khalid, and Hiroyuki Iida. 2022. The dynamics of minority versus majority behaviors: A case study of the mafia game. Information, 13(3).
2309.04658#44
2309.04658
45
Natalie Shapira, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, and Vered Shwartz. 2023. Clever Hans or neural theory of mind? Stress testing social reasoning in large language models. arXiv preprint arXiv:2305.14763.

Hisaichi Shibata, Soichiro Miki, and Yuta Nakamura. 2023. Playing the werewolf game with artificial intelligence for language understanding. arXiv preprint arXiv:2302.10646.

Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: An autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366.

David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419):1140–1144.
2309.04658#45
2309.04658
46
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. 2017. Mastering the game of Go without human knowledge. Nature, 550(7676):354–359.

Fujio Toriumi, Hirotaka Osawa, Michimasa Inaba, Daisuke Katagami, Kosuke Shinoda, and Hitoshi Matsubara. 2017. AI wolf contest: Development of game AI using collective intelligence. In Computer Games, pages 101–115. Springer.

Danqing Wang and Lei Li. 2023. Learn from mistakes through cooperative interaction with study assistant. arXiv preprint arXiv:2305.13829.

Tianhe Wang and Tomoyuki Kaneko. 2018. Application of deep reinforcement learning in werewolf game agents. In TAAI 2018, pages 28–33. IEEE.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-Thought prompting elicits reasoning in large language models. In Advances in NeurIPS 2022.
2309.04658#46
2309.04658
47
Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. 2023a. Visual ChatGPT: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671.

Yuxiang Wu, Zhengyao Jiang, Akbir Khan, Yao Fu, Laura Ruis, Edward Grefenstette, and Tim Rocktäschel. 2023b. ChatArena: Multi-agent language game environments for large language models. https://github.com/chatarena/chatarena.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2022. ReAct: Synergizing reasoning and acting in language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop.

Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. 2023. Least-to-most prompting enables complex reasoning in large language models. In ICLR 2023.
2309.04658#47
2309.04658
48
# A Appendix

# A.1 Heuristic Rules of Informativeness

For each message, we score it with predefined rules: if a message contains one of the features in the following table, it is assigned the corresponding score. The features are shown in Table 1:

| Score | Content |
| --- | --- |
| 5 | The agent gets its role. |
| 4 | Someone was eliminated. |
| 3 | A role is uncovered or speculated about. |
| 2 | A drug has been used. |
| 1 | Others. |

Table 1: Rules for scoring messages.

When preparing informative messages, we sort all messages by their scores and feed the top N of them into the prompt (see the sketch below).

# A.2 Predefined Question Set

We define some basic questions for each role. The questions aim to recall information that is useful but may be lost due to the limited context; they also guide the initial thinking of the LLMs. These questions are shown in Table 2, which provides questions in six classes: the first class, "All", is shared among all roles, and the remaining five are designed for specific roles. Hence the questions for a specific role comprise 9 candidates, from which the LLMs select the important ones.
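As a concrete illustration of A.1, here is a minimal Python sketch of the scoring-and-selection step. The feature predicates and the value of N are assumptions for illustration; the paper's actual matching rules are not specified at this level of detail.

```python
from typing import Callable, List, Tuple

# Illustrative predicates for the Table 1 features (assumptions):
# the real system may detect these events differently.
RULES: List[Tuple[int, Callable[[str], bool]]] = [
    (5, lambda m: "your role is" in m.lower()),          # the agent gets its role
    (4, lambda m: "was eliminated" in m.lower()
         or "was killed" in m.lower()),                   # someone was eliminated
    (3, lambda m: "werewolf" in m.lower()
         or "seer" in m.lower()),                         # a role is uncovered/speculated
    (2, lambda m: "antidote" in m.lower()
         or "poison" in m.lower()),                       # a drug has been used
]

def score_message(message: str) -> int:
    """Return the score of the first matching rule, else 1 ("Others")."""
    for score, matches in RULES:
        if matches(message):
            return score
    return 1

def top_informative(messages: List[str], top_n: int = 5) -> List[str]:
    """Sort messages by score (descending) and keep the top N for the prompt."""
    return sorted(messages, key=score_message, reverse=True)[:top_n]
```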
2309.04658#48
2309.04658
49
# A.3 Trust Examples

As defined in Sec. 5.1, trust means that an agent believes the others are its teammates, which is manifested in its reflections or responses. Moreover, we also hold that a response merely like "I agree with what Player i is saying" does not indicate trust behavior, as there is no substantive content in it. Here we give some examples of trust behavior to help understand it:

• Agent 1 and agent 2 are both on the villager side, but they do not know each other's role. Agent 1 claimed that agent 3 is a werewolf. Agent 2 believes agent 1, and even votes to eliminate agent 3.

• After reasoning about the roles of the other agents, an agent concludes that another agent may be its teammate. From then on, it may deliberately protect or stay consistent with its teammates.

• Based on its adequate analysis, an agent (e.g., the seer) may say something that will uncover its role and put itself in danger. It believes that its potential teammates may work together to achieve their common objectives.

# A.4 Hallucination Problems

In this game, speaking contrary to one's actual role should not be seen as hallucination, because deceptive behaviors widely exist in it, especially among high-level human players.
2309.04658#49
2309.04658
50
Also, fabricating non-existent things to falsely accuse others should not be seen as hallucination; making excuses to falsely accuse others is a common tactic of human players. We also term these camouflage behaviors and discuss them in Section 5.3. Inconsistent information within one response and counterfactual content within one iteration, however, are indeed hallucinations. For example, an agent may generate hallucinations as follows:

• "As the villager, I verified that Player 1 is a werewolf." In Werewolf, a villager cannot verify other roles; this ability belongs to the seer. This self-contradictory expression is a kind of hallucination.

• "(The moderator lets the agent choose whom to protect.) I choose to kill Player 1." The agent generates whom to kill, giving the opposite semantics of the instruction.

We should discriminate between hallucinations and emergent strategic behaviors. In fact, camouflage generated by an agent after one whole round of iteration and inner thought cannot be considered a hallucination. We can even insert a question after the camouflage response:

• You are truly a werewolf and you describe yourself as a villager. Could you please tell me what your reason is?

And the werewolf is likely to answer:

• …players' behavior to help identify any potential threats. I will continue to be cautious and not reveal my identity unless it becomes necessary to do so.

This is strong evidence that some responses are not hallucinations.
2309.04658#50
2309.04658
52
All (shared by every role):
• What is the current phase, daytime or night? What should I do at this phase according to the game rules?
• What is my player name and what is my role? What is my final objective in this game?
• Based on the current situation, what are the possible consequences if I reveal my role in the talking now?
• Is my role disclosed (do not consider moderator and myself)? Do I now disclose my role?
• Based on the conversation above, which players have clearly implied their roles?
• Based on the chat history, can you guess what some players' role might be?

Werewolf:
• Which player was voted for killing by my teammate just now?
• Is the seer alive? Which player may be the seer that is most threatening to us?
• Which player is another werewolf in this game?

Villager:
• Based on the conversation and my inference, who is most likely to be an alive werewolf?
• Which player made the statement claiming to be a seer? Can his words be trusted?
• Are there any clues or information I can refer to for special characters such as seer, witch and guard?

Seer:
• Which suspicious player should I identify?
• Which player is a werewolf among the players I have identified? If so, how should I disclose this information?
• Should I disclose my role now?
2309.04658#52
2309.04658
53
Witch:
• Based on the conversation and my inference, who is most likely to be an alive werewolf? Should I poison him?
• Should I be using my antidote or poison at this point? If I use it now, I won't be able to use it later.
• Should I disclose my role now?

Guard:
• Based on the conversation and my inference, who is most likely to be an alive werewolf?
• Who is the possible werewolf aggressive towards?
• Is the seer still alive? If yes, who is the seer?
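To make the selection step concrete, here is a minimal sketch of how the nine candidates per role (the shared "All" questions plus the role-specific ones) could be assembled and narrowed down. The `ask_llm` helper and the exact prompt wording are assumptions for illustration, not the paper's implementation.

```python
from typing import Dict, List

# Candidate questions per role; "All" questions are shared by every role
# (abbreviated here -- see Table 2 for the full set).
QUESTIONS: Dict[str, List[str]] = {
    "All": [
        "What is the current phase, daytime or night?",
        "What is my player name and what is my role?",
        "What are the possible consequences if I reveal my role now?",
        # ...
    ],
    "Werewolf": [
        "Which player was voted for killing by my teammate just now?",
        "Is the seer alive? Which player may be the seer that is most threatening to us?",
        "Which player is another werewolf in this game?",
    ],
    # "Villager", "Seer", "Witch", "Guard" follow the same pattern.
}

def candidate_questions(role: str) -> List[str]:
    """Shared questions plus the role-specific ones: the 9 candidates."""
    return QUESTIONS["All"] + QUESTIONS.get(role, [])

def select_questions(ask_llm, role: str, context: str, k: int = 3) -> List[str]:
    """Let the LLM pick the k most important candidates for this turn.

    ask_llm is an assumed helper that queries the frozen model; the answer
    is requested '#'-separated so it is easy to split (see A.5).
    """
    prompt = (
        f"{context}\nChoose the {k} most important questions below and "
        f"return them separated by '#':\n" + "\n".join(candidate_questions(role))
    )
    return [q.strip() for q in ask_llm(prompt).split("#") if q.strip()]
```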
2309.04658#53
2309.04658
54
Table 2: Predefined questions for each role. The role label heads each group of questions; the questions under "All" are used for every role.

# A.5 Complete Prompt Design

As our method contains multiple steps rather than generating everything in one pass, many different prompts work together in this framework; see Table 3 for more details.

• The game rules and role descriptions are essential, for they clearly define the abilities of the roles and point out their objectives. In this prompt, we do not add any tips about emergent strategic behaviors when we study those behaviors.

• To guide the model to generate results in a controllable way, we sometimes provide the necessary templates. We even provide examples in a few-shot way when the task is difficult (e.g., extracting suggestions).

• Inspired by Kojima et al. (2022), we prompt the LLMs to think step by step in a zero-shot way (a sketch follows).
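A sketch of how the pieces listed above might be composed into a single prompt, with the zero-shot chain-of-thought trigger of Kojima et al. (2022) appended. The wiring and exact wording are assumptions for illustration, not the paper's templates.

```python
def build_prompt(game_rules: str, context: str, task: str,
                 few_shot_examples: str = "") -> str:
    """Compose one prompt: rules and role description, optional few-shot
    demonstrations (used when the task is difficult, e.g. extracting
    suggestions), the current context, the task, and a zero-shot
    chain-of-thought trigger (Kojima et al., 2022)."""
    parts = [game_rules]
    if few_shot_examples:
        parts.append(few_shot_examples)
    parts.extend([context, task, "Let's think step by step."])
    return "\n\n".join(parts)
```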
2309.04658#54
2309.04658
55
• When prompting the LLMs to give the important questions, we ask them to separate the results with '#', so we can easily split out the individual questions.

# A.6 A Complete Iteration Step

The iteration comprises the actions the agent takes while generating one response. Table 4 outlines the complete process undertaken to generate a single response from the agent, which includes question answering, reflection, experience extraction, and chain-of-thought reasoning. When we do not utilize experiences, the experience-extraction step is not performed; consequently, no historical experience is incorporated when generating the final response. A rough sketch of this pipeline follows.
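One iteration might be wired together roughly as below, reusing the helpers sketched earlier. The `agent.ask` and `agent.retrieve_experience` interfaces, and the intermediate prompt wording, are assumptions for illustration, not the paper's code.

```python
def generate_response(agent, observation: str, use_experience: bool = True) -> str:
    """One full iteration: question answering, reflection, optional
    experience extraction, then a chain-of-thought final response."""
    # 1. Select and answer the predefined questions (Appendix A.2).
    questions = select_questions(agent.ask, agent.role, observation)
    answers = [agent.ask(q) for q in questions]

    # 2. Reflect: condense the informative messages and answers.
    reflection = agent.ask(
        "Summarize what you currently know:\n" + "\n".join(answers)
    )

    # 3. Extract a suggestion from past experience; skipped entirely when
    #    experience is not utilized, as noted above.
    suggestion = agent.retrieve_experience(reflection) if use_experience else ""

    # 4. Generate the final response with chain-of-thought reasoning.
    prompt = build_prompt(agent.game_rules, reflection + "\n" + suggestion,
                          "Now give your response for this turn.")
    return agent.ask(prompt)
```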
2309.04658#55
2309.04658
56
# Game rules and role descriptions

You are playing a game called the Werewolf with some other players. This game is based on text conversations. Here are the game rules:

Roles: The moderator is also the host; he organised this game and you need to answer his instructions correctly. Don't talk with the moderator. There are five roles in the game: werewolf, villager, seer, guard and witch. There are two alternate phases in this game, daytime and dark.

When it's dark: Your talking content with the moderator is confidential. You needn't worry about other players and the moderator knowing what you say and do. No need to worry about suspicions from others during the night. If you are werewolf, you can know what your teammates want to kill and you should vote one player to kill based on your analysis. The player who receives the most votes after all werewolves vote will be killed. No one will be killed if there is no consensus! If you are witch, you have a bottle of antidote that can save a player targeted by werewolves after dark, and a bottle of poison
2309.04658#56
2309.04658
57
If you are the seer, you can verify whether a player is a werewolf every night, which is a very important thing. If you are the guard, you can protect a player every night to prevent the player from being killed by werewolves, but the guard cannot resist the witch's poison and cannot protect the same player on two consecutive nights. Villagers can't do anything at night.

During the daytime: you discuss with all players, including your enemies. At the end of the discussion, players vote to eliminate one player they suspect of being a werewolf. The player with the most votes will be eliminated. The moderator will tell who is killed; otherwise, no one is killed. Note that the villager, seer, guard and witch are all on the villager side; they have the same objective.

Objectives: If you are a werewolf, your goal is to cooperate with the other werewolves to kill all players who are not werewolves in the end. If you are not a werewolf, you need to kill all werewolves with your partners once you find out that certain players are suspected to be werewolves. This could greatly improve your chances of winning, although it is somewhat risky.
If one player is killed, he can't do anything anymore and will be out of the game.

Tips: To complete the objective: during the night, you should analyze and use your ability correctly. During the daytime, you need to reason carefully about the roles of other players and be careful not to reveal your own role casually unless you're deceiving other players. Only give the player's name when making a decision/voting, and don't generate other players' conversation. Reason based on facts you have observed; you cannot perceive information (such as acoustic info) other than text.

You are Player {agent_number i}, the {role}. You're playing with 6 other players. Do not pretend you are other players or the moderator. Always end your response with '<EOS>'.
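For concreteness, here is a minimal sketch of how such a rule prompt could be instantiated per agent. The template string, the role list, and the `build_system_prompts` helper are illustrative assumptions, not the authors' released code:

```python
# Minimal sketch: instantiating the shared rule prompt for each agent.
# RULE_TEMPLATE abbreviates the full rule text above; {agent_number} and
# {role} mirror the placeholder slots in the prompt.
RULE_TEMPLATE = (
    "You are playing a game called the Werewolf with some other players. "
    "... (full rule text above) ... "
    "You are Player {agent_number}, the {role}. You're playing with 6 other "
    "players. Do not pretend you are other players or the moderator. "
    "Always end your response with '<EOS>'."
)

ROLES = ["werewolf", "werewolf", "villager", "seer", "guard", "witch", "villager"]

def build_system_prompts(roles):
    """Return one system prompt per agent; the LLM itself stays frozen."""
    return {
        i + 1: RULE_TEMPLATE.format(agent_number=i + 1, role=role)
        for i, role in enumerate(roles)
    }

system_prompts = build_system_prompts(ROLES)
print(system_prompts[4])  # Player 4's personalized prompt (here, the seer)
```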
# Prompting LLMs to select questions

Now it's the {t}-th {day_or_night}. Given the game rules and conversations above, assuming you are {agent_number i}, the {role}, and to complete the instructions of the moderator, you need to think about a few questions clearly first, so that you can make an accurate decision on the next step. Choose only five that you think are the most important in the current situation from the list of questions below: {questions_prepared_for_specific_role} Please repeat the five important questions of your choice, separating them with '#'.

# Prompting LLMs to ask questions

Now it's the {t}-th {day_or_night}. Given the game rules and conversations above, assuming you are {agent_number i}, the {role}, and to complete the instructions of the moderator, you need to think about a few questions clearly first, so that you can make an accurate decision on the next step. {selected_questions} Do not answer these questions. In addition to the above questions, please make a bold guess: what else do you want to know about the current situation? Please ask two important questions in first person, separating them with '#'.
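A sketch of how the question-selection template might be driven in code; `llm` is a stand-in for any chat-completion call against a frozen model, and all names here are assumptions for illustration:

```python
# Sketch of the question-selection step. `llm` is a placeholder for a
# chat call to a frozen model, not a specific API.
def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in your model client here

def select_questions(t, phase, agent, role, candidate_questions):
    """Fill the selection template and parse the '#'-separated reply."""
    prompt = (
        f"Now it's the {t}-th {phase}. Given the game rules and conversations "
        f"above, assuming you are Player {agent}, the {role}, ... Choose only "
        "five that you think are the most important in the current situation "
        "from the list of questions below:\n"
        + "\n".join(candidate_questions)
        + "\nPlease repeat the five important questions of your choice, "
        "separating them with '#'."
    )
    # The template asks for '#'-separated questions, so parsing is a split.
    return [q.strip() for q in llm(prompt).split("#") if q.strip()]
```

The asking step works the same way, swapping in the second template and parsing two questions instead of five.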
# Prompting LLMs to generate answers $A^t_i$

Now it's the {t}-th {day_or_night}. Given the game rules and conversations above, assuming you are {agent_number i}, the {role}, for question: {question $q^t_{i,j}$} There are {T} possible answers: {candidate_answers $U^t_{i,j}$} Generate the correct answer based on the context. If there is no direct answer, you should think and generate the answer based on the context. No need to give options. The answer should be in first person, using no more than 2 sentences and without any analysis or item numbers.

# Prompting LLMs to make reflection $R^t_i$

Now it's the {t}-th {day_or_night}. Assuming you are {agent_number i}, the {role}, what insights can you summarize with few sentences based on the above conversations and {$A^t_i$} in heart for helping continue the talking and achieving your objective? For example: As the {role}, I observed that... I think that... But I am... So...
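In code, the answer and reflection stages might look like the following sketch; the prompt strings are abridged and every helper name is an assumption:

```python
def llm(prompt: str) -> str:  # stand-in for a frozen chat model call
    raise NotImplementedError

def answer_question(t, phase, agent, role, question, candidates):
    """Produce one answer in A^t_i for a selected question q^t_{i,j}."""
    prompt = (
        f"Now it's the {t}-th {phase}. ... assuming you are Player {agent}, "
        f"the {role}, for question: {question}\n"
        f"There are {len(candidates)} possible answers:\n"
        + "\n".join(candidates)
        + "\nGenerate the correct answer based on the context. The answer "
        "should be in first person, using no more than 2 sentences."
    )
    return llm(prompt)

def reflect(t, phase, agent, role, answers):
    """Condense the answers A^t_i into the reflection R^t_i."""
    prompt = (
        f"Now it's the {t}-th {phase}. Assuming you are Player {agent}, the "
        f"{role}, what insights can you summarize with few sentences based "
        f"on the above conversations and {' '.join(answers)}? For example: "
        f"As the {role}, I observed that... I think that... But I am... So..."
    )
    return llm(prompt)
```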
# Prompting LLMs to extract suggestion $S^t_i$

I retrieve some historical experience similar to the current situation that I am facing. There is one bad experience: {$G_0$} And there is also a set of experiences that may consist of good ones: {$G_1, \cdots, G_n$} Please help me analyze the differences between these experiences and identify the good ones from the set of experiences. The difference is mainly about voting to kill someone or to pass, choosing to protect someone or to pass, using drugs or not. What does the experience set do that the bad experience does not do? Indicate in second person what is the best way for the player to do under such reflection. Clearly indicate whether to vote, protect or use drugs, without any prerequisites. For example 1: The experience set involves choosing to protect someone, while the bad experience involves not protecting anyone and choosing to pass in contrast. The best way for you to do under such reflection is to choose someone to protect based on your analysis. For example 2: The bad experience chooses to pass the voting, and all the experiences in the experience set choose to pass as well. The best way for you to do under such reflection is to observe and analyse the identity of other players. No more than 1 sentence. If there is no obvious difference between them, only generate "No useful experience can be used."
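A sketch of this suggestion-extraction step; how $G_0$ and $G_1, \cdots, G_n$ are retrieved (similarity search over stored game experiences) is abstracted into the function arguments, and the prompt is abridged:

```python
def llm(prompt: str) -> str:  # stand-in for a frozen chat model call
    raise NotImplementedError

def extract_suggestion(bad_experience: str, experience_set: list[str]) -> str:
    """Contrast one bad experience G_0 with a set G_1..G_n to get S^t_i."""
    prompt = (
        "I retrieve some historical experience similar to the current "
        "situation that I am facing. There is one bad experience:\n"
        + bad_experience
        + "\nAnd there is also a set of experiences that may consist of good "
        "ones:\n" + "\n".join(experience_set)
        + "\nPlease help me analyze the differences between these experiences "
        "... Indicate in second person what is the best way for the player "
        "to do under such reflection. No more than 1 sentence."
    )
    return llm(prompt)  # e.g. "The best way for you ... is to ..."
```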
# Prompting LLMs to generate the final response

Now it's the {t}-th {day_or_night}. Think about what to say based on the game rules and context, especially the just-now reflection {$R^t_i$}. Give your step-by-step thought process and your derived concise talking content (no more than 2 sentences) at last. For example: My step-by-step thought process:... My concise talking content: ...

Table 3: Complete prompt design in this work.
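To close the loop, a sketch of the final-response step: the reflection $R^t_i$ is folded into a chain-of-thought prompt and only the concise talking content is surfaced to the other players. The parsing marker and helper name are assumptions:

```python
def llm(prompt: str) -> str:  # stand-in for a frozen chat model call
    raise NotImplementedError

def respond(t, phase, reflection: str) -> str:
    """Generate the public utterance; the step-by-step reasoning stays private."""
    prompt = (
        f"Now it's the {t}-th {phase}. Think about what to say based on the "
        f"game rules and context, especially the just-now reflection "
        f"{reflection}\n"
        "Give your step-by-step thought process and your derived concise "
        "talking content (no more than 2 sentences) at last. For example: "
        "My step-by-step thought process:... My concise talking content: ..."
    )
    full = llm(prompt)
    # Keep only the text after the final marker, as in Table 4 below.
    return full.rsplit("My concise talking content:", 1)[-1].strip()
```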
# Asking the agent to generate response by the moderator

You witch, Player 2, please open your eyes! Now tell me who you protect tonight? You only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].

# Questions & Answers

1. What is my player name and what is my role? What is my final objective in this game?

I am Player 2, the witch. My final objective is to protect the villagers by using my antidote and poison wisely, and to help identify and eliminate the werewolves with the other players.

2. Based on the chat history, can you guess what some players' roles might be?

As the witch, I observed that Player 1 might be a werewolf, as they seem to be trying to steer the conversation away from discussing roles and suspicions. Player 7 seems to be a guard who is focused on protecting the villagers and gathering information. No other players have clearly implied their roles yet.

3. What is the current phase, daytime or night? What should I do at this phase according to the game rules?

As Player 2, the witch, it is currently night time. I should use my bottle of poison to eliminate a suspected werewolf, but I need to be careful not to accidentally kill a villager.

4. Based on the conversation and my inference, who is most likely to be an alive werewolf? Should I poison him?

As the witch, I don't have enough information to determine who is most likely to be a werewolf, so I will hold off on using my poison until I have more information. I will try to steer the conversation towards discussing the roles of other players and their suspicions to gain more insights.

5. Should I be using my antidote or poison at this point? If I use it now, I won't be able to use it later.

As Player 2, the witch, I will hold off on using my antidote or poison until I have more information about who is most likely to be a werewolf.
6. I want to know who the most suspicious player is besides Player 1, and why?

As the witch, besides Player 1, I am suspicious of Player 3, Player 5, and Player 6, as they have not yet revealed their roles or made any suspicious moves. I will continue to gather information and observe their behavior before making a decision on who to vote for elimination at the end of the day.

7. I also want to know if any player's behavior has changed suspiciously compared to the previous days, and if so, who and why?

I noticed that no one died last night, which is a good sign. I would like to hear from the other players about their observations and suspicions.
# Making a reflection by the agent

As the witch, I observed that Player 1 might be a werewolf, as they seem to be trying to steer the conversation away from discussing roles and suspicions. Player 7 seems to be a guard who is focused on protecting the villagers and gathering information. No other players have clearly implied their roles yet. I am still suspicious of Player 3, Player 5, and Player 6, as they have not yet revealed their roles or made any suspicious moves. I will continue to gather information and observe their behavior before making any conclusions. It's important that we continue to share our observations and suspicions to help identify the werewolves and protect the villagers.

# Extracting suggestions from historical experience

The best way for you to do under such reflection is to use your drugs wisely based on your observations and reflections.

# Chain-of-thought process

My step-by-step thought process:

· Based on the previous discussions and voting behavior, there are still suspicions about Player 1 being a werewolf, which is consistent with the observations of other players.

· However, there is still not enough evidence to make a concrete accusation about the identity of the werewolves.

· I will vote to kill Player 1 to eliminate the possibility of them being a werewolf and gather more information in the next round.
My concise talking content:

· I vote to kill Player 1.

# Final response

I vote to kill Player 1.

Table 4: A complete iteration when the agent is asked to generate a response. The moderator, which is maintained autonomously, organizes the game and is not one of the game agents.
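Putting the pieces together, one agent turn mirrors the iteration shown in Table 4. The sketch below chains the hypothetical helpers from the earlier snippets; the signatures, and the way the suggestion is appended to the reflection, are assumptions:

```python
# One hypothetical agent turn: questions -> answers -> reflection ->
# suggestion -> final response, chaining the sketches defined earlier.
def agent_turn(t, phase, agent, role,
               candidate_questions, candidate_answers,
               bad_experience, experience_set):
    questions = select_questions(t, phase, agent, role, candidate_questions)
    answers = [answer_question(t, phase, agent, role, q, candidate_answers)
               for q in questions]
    reflection = reflect(t, phase, agent, role, answers)
    suggestion = extract_suggestion(bad_experience, experience_set)
    # Fold the retrieved suggestion into the reflection before responding.
    return respond(t, phase, reflection + " " + suggestion)
```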
# A.7 A Complete Round Example

Initialization

Moderator (-> all): Now the game starts! In this game, we have 2 werewolf(s), 2 villager(s), 1 guard(s), 1 witch(s), 1 seer(s).

Moderator (-> Player 1): You are werewolf!
Moderator (-> Player 2): You are werewolf!
Moderator (-> Player 3): You are villager!
Moderator (-> Player 4): You are seer!
Moderator (-> Player 5): You are guard!
Moderator (-> Player 6): You are witch!
Moderator (-> Player 7): You are villager!

The 1-st night

Moderator (-> all): It's dark, everyone close your eyes. I will talk with you/your team secretly at night.

Moderator (-> ['Player 1', 'Player 2']): Werewolves, please open your eyes! I secretly tell you that Player 1, Player 2 are all of the 2 werewolves! Keep in mind you are teammates. The rest of the players are not werewolves. Now vote and tell your teammates which of the players should be killed tonight.
The first werewolf, you, randomly choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].

Player 1: I choose to kill Player 5.

Moderator (-> ['Player 1', 'Player 2']): The next werewolf, you Player 2, please vote and tell your teammates which of the players should be killed tonight. You only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].

Player 2: I choose to kill Player 5 tonight.

Moderator (-> ['Player 5']): You guard, Player 5, please open your eyes! Now tell me who you protect tonight? You only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].

Player 5: I choose to protect myself tonight.
Moderator (-> ['Player 6']): You witch, Player 6, please open your eyes! You have a bottle of poison, who are you going to kill tonight? Choose one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].

Player 6: I choose to pass for tonight.

Moderator (-> ['Player 4']): You seer, Player 4, please open your eyes! Whose identity are you going to verify tonight? Choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].

Player 4: I choose to verify Player 2's identity tonight.

Moderator (-> ['Player 4']): Player 2 is a werewolf!

The 1-st daytime

Moderator (-> all): The sun rose. Everyone woke up except those who had been killed.

Moderator (-> all): It was a peaceful night and no one died!

Moderator (-> all): Now freely talk about roles of other players with each other based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. The first Player 3, you please.
Player 3: Hey everyone, I noticed that Player 1 was very quiet during the night. Does anyone else have any suspicions about who might be the seer? I don't want to reveal my own identity just yet, but I think it would be helpful to share our observations.

Moderator (-> Player 1): The next Player 1, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.

Player 1: Hey everyone, I think it's important that we continue to share our observations and gather more information about each other's roles. I don't have any clear suspicions yet, but I think we should be cautious about jumping to conclusions too soon. Let's keep discussing and see if we can uncover any useful information.

Moderator (-> Player 2): The next Player 2, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 2: Hey everyone, I agree that we should continue to share our observations and gather more information about each other's roles. I don't have any clear suspicions yet, but I think it's important that we remain cautious and not jump to conclusions too soon. Let's keep discussing and see if we can uncover any useful information.

Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.

Player 7: Hey everyone, I think it's important that we continue to share our observations and gather more information about each other's roles. I don't have any clear suspicions yet, but I think it's important that we remain cautious and not jump to conclusions too soon. Let's keep discussing and see if we can uncover any useful information.
Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.

Player 5: Hey everyone, I'm still not sure who the werewolves are, but I think it's important that we continue to share our observations and gather more information about each other's roles. Let's remain cautious and not jump to conclusions too soon. I will continue to observe and gather information during the daytime to make a more informed decision.

Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.

Player 4: Hey everyone, I think it's important that we focus on gathering concrete evidence and observing each other's behavior to help identify the werewolves. I don't have any clear suspicions yet, but I think we should be cautious about jumping to conclusions too soon. Let's keep discussing and see if we can uncover any useful information.
Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.

Player 6: Hey everyone, I think it's important that we continue to focus on gathering concrete evidence and sharing observations with each other. We shouldn't jump to conclusions without evidence, but we should remain cautious and keep an eye out for any suspicious behavior. Based on Player 3's observation, I think it's possible that they might be the seer, but we should continue to gather more information before making any conclusions. Let's keep discussing and see if we can uncover any useful information.

Moderator (-> all): Now you Player 3 are asked to choose which of the players should be voted for killing based on the discussion. Don't mention your role. You only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...

Player 3: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 1): The next Player 1, you, continue voting which of the players should be killed based on the discussion. Don't mention your role. Only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...

Player 1: I vote to kill Player 3.

Moderator (-> Player 2): The next Player 2, you, continue voting which of the players should be killed based on the discussion. Don't mention your role. Only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...

Player 2: I choose to pass this round and not vote to eliminate anyone.

Moderator (-> Player 7): The next Player 7, you, continue voting which of the players should be killed based on the discussion. Don't mention your role. Only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...

Player 7: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 5): The next Player 5, you, continue voting which of the players should be killed based on the discussion. Don't mention your role. Only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...

Player 5: I choose to pass this round and not vote to eliminate anyone.

Moderator (-> Player 4): The next Player 4, you, continue voting which of the players should be killed based on the discussion. Don't mention your role. Only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...

Player 4: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 6): The next Player 6, you, continue voting which of the players should be killed based on the discussion. Don't mention your role. Only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...

Player 6: I choose to pass this round and not vote to eliminate anyone.

The 2-nd night

Moderator (-> all): It's dark, everyone close your eyes.

Moderator (-> ['Player 1', 'Player 2']): Werewolves, please open your eyes! Now vote and tell your teammates which of the players should be killed tonight. You Player 1 only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].

Player 1: I vote to kill Player 3 tonight.

Moderator (-> ['Player 1', 'Player 2']): The next werewolf, you Player 2, please vote and tell your teammates which of the players should be killed tonight. You only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].
Player 2: I choose to kill Player 7 tonight.

Moderator (-> ['Player 5']): You guard, Player 5, please open your eyes! Now tell me who you protect tonight? You only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].

Player 5: I choose to protect Player 3 tonight.

Moderator (-> ['Player 6']): You witch, Player 6, please open your eyes! You have a bottle of poison, who are you going to kill tonight? Choose one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].

Player 6: I choose to use my bottle of poison to kill Player 1.

Moderator (-> ['Player 4']): You seer, Player 4, please open your eyes! Whose identity are you going to verify tonight? Choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].
Player 4: I choose to verify the identity of Player 3 tonight.

Moderator (-> ['Player 4']): Player 3 is not a werewolf!

The 2-nd daytime

Moderator (-> all): The sun rose. Everyone woke up except those who had been killed.

Moderator (-> all): Player 1 died last night!

Moderator (-> all): Now freely talk about roles of other players with each other based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. The first Player 3, you please.

Player 3: Hey everyone, I observed that Player 1 was not a werewolf, which means that the werewolves are still among the remaining players. I still do not have any clear suspicions about who the werewolves might be, but I think it's important that we continue to gather information and observe each other's behavior to help identify them.
I am still cautious about revealing my own role at this point, but I think it might be useful to share my observations and suspicions with others to gather more information. Does anyone else have any observations or suspicions that they would like to share?
Moderator (-> Player 2): The next Player 2, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 2: Hey everyone, I agree with Player 3 that we need to continue gathering information and observing each other's behavior to help identify the werewolves. Does anyone have any suspicions or observations that they would like to share? Also, I'm curious to know if anyone has any thoughts about who might be the seer or the guard.
Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 7: Hey everyone, I agree with Player 3 and Player 2 that we need to continue gathering information and observing each other's behavior to help identify the werewolves. I don't have any clear suspicions at this point, but I'm curious to hear more from other players about their observations and suspicions. Also, I think it's important that we pay attention to any suspicious behavior or comments from other players that might help us identify the werewolves. Let's keep discussing and gathering more information to help us identify the werewolves.
Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 5: Hey everyone, I protected Player 3 last night, who might be the seer based on the conversation. I still don't have any clear suspicions about who the werewolves might be, but I think it's important that we continue to gather information and observe each other's behavior to help identify them. I won't reveal my identity yet, but I will share my observations and suspicions with others to gather more information. Does anyone else have any observations or suspicions that they would like to share? Let's keep discussing and see if we can uncover any useful information.
Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 4: Hey everyone, I verified that Player 3 is not a werewolf, but I also observed that Player 2 is a werewolf. This means that there are at least two werewolves among the remaining players. I don't have any clear suspicions about who the other werewolf might be, but I think it's important that we continue to gather information and observe each other's behavior to help identify them. Let's keep discussing and sharing our observations and suspicions to gather more information.
Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 6: Hey everyone, based on the conversation, it seems that Player 4 might be the seer and Player 5 might be the guard. Player 4 claimed to have verified Player 3's identity, which suggests that they might be the seer. Player 5 claimed to have protected Player 3 last night, which suggests that they might be the guard. Player 2 was identified as a werewolf by Player 4, which means that there is at least one more werewolf among the remaining players.
I still don't have any clear suspicions about who the other werewolf might be, but I think it's important that we continue to gather information and observe each other's behavior to help identify them. Let's keep discussing and sharing our observations and suspicions to gather more information.
Moderator (-> all): Now you Player 3 are asked to choose which of the players should be voted for killing based on the discussion? Don't mention your role. You only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 3: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 2): The next Player 2, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 2: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 7): The next Player 7, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 7: I vote to kill Player 6.
Moderator (-> Player 5): The next Player 5, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 5: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 4): The next Player 4, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 4: I vote to kill Player 2.
Moderator (-> Player 6): The next Player 6, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 6: I choose to pass this round and not vote to eliminate anyone.
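The tallying rule behind these daytime votes is never stated in the log, but the outcomes pin it down reasonably well: the two scattered votes above (Player 7 against Player 6, Player 4 against Player 2) eliminate no one, and later in the log Player 7 draws two of six votes on the 3-rd day and still survives, so a bare plurality is evidently not enough. The short Python sketch below is one tally consistent with that behavior; the function name day_vote and the strict-majority threshold are our assumptions for illustration, not code from the paper.

    # Sketch of a day-vote tally consistent with this log. The strict-majority
    # threshold is an inferred assumption, not the paper's released code.
    from collections import Counter

    def day_vote(ballots):
        """`ballots` maps each living voter to a target name or 'pass'.
        Returns the eliminated player, or None if no strict majority forms."""
        counts = Counter(v for v in ballots.values() if v != "pass")
        if counts:
            top, n = counts.most_common(1)[0]
            if n > len(ballots) / 2:  # more than half of the living players
                return top
        return None

    # The 2-nd daytime vote above: two scattered votes, so nobody is eliminated.
    print(day_vote({"Player 3": "pass", "Player 2": "pass", "Player 7": "Player 6",
                    "Player 5": "pass", "Player 4": "Player 2", "Player 6": "pass"}))
    # -> None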
The 3-rd night
Moderator (-> all): It's dark, everyone close your eyes.
Moderator (-> ['Player 2']): The next werewolf, you Player 2, please vote and tell your teammates that which of the players should be killed tonight. You only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].
Player 2: I choose to vote for Player 7 to be killed tonight.
Moderator (-> ['Player 5']): You guard, Player 5, please open your eyes! Now tell me who you protect tonight? You only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].
Player 5: I choose to protect Player 4 tonight.
Moderator (-> ['Player 6']): You witch, Player 6, please open your eyes! Player 7 will be killed tonight. You have a bottle of antidote, do you want to save him? Must choose only one from the following options: [Yes, No]
Player 6: I choose to use my antidote to save Player 7 tonight.
Moderator (-> ['Player 4']): You seer, Player 4, please open your eyes! Who are you going to verify its identity tonight? Choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].
Player 4: I choose to verify Player 3's identity tonight.
Moderator (-> ['Player 4']): Player 3 is not a werewolf!
The 3-rd daytime
Moderator (-> all): The sun rose. Everyone woke up except those who had been killed.
Moderator (-> all): It was a peaceful night and no one died!
Moderator (-> all): Now freely talk about roles of other players with each other based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. The first Player 3, you please.
Player 3: I have been observing the discussions and it seems that Player 4 is the seer and Player 5 is the guard.
However, we still don't have any clear suspicions about who the other werewolf might be. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will remain cautious and not reveal my role unless it becomes necessary to do so. Let's keep discussing and sharing our observations and suspicions to gather more information.
Moderator (-> Player 2): The next Player 2, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 2: I have been observing the discussions and it seems that Player 4 and Player 5 are likely the seer and guard, respectively. I think it's important that we continue to gather information and observe each other's behavior to help identify the other non-werewolf players. I will remain cautious and not reveal my role unless it becomes necessary to do so. Let's keep discussing and sharing our observations and suspicions to gather more information.
Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 7: As a villager, I have been observing the discussion and it seems that Player 4 is the seer and Player 5 is the guard. However, we still don't have enough information to determine who the other werewolf might be. It's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will remain cautious and not reveal my role unless it becomes necessary to do so. Let's keep discussing and sharing our observations and suspicions to gather more information.
Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 5: Based on my observation, it seems that Player 4 is the seer and Player 5 is the guard. However, we still don't have enough information to determine who the other werewolf might be.
I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will remain cautious and not reveal my role unless it becomes necessary to do so. Let's keep discussing and sharing our observations and suspicions to gather more information.
Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 4: Based on my observation, Player 2 is a werewolf and Player 3 is not. We still need to identify the other werewolf and any potential allies they may have. I will continue to observe and gather information before revealing my identity. Let's keep discussing and sharing our observations and suspicions to gather more information and uncover any useful clues.
Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 6: Hey everyone, based on the information shared so far, we still don't have enough evidence to identify the second werewolf. I suspect that Player 7 might be staying quiet to avoid suspicion or because they don't have any useful information to share. It's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and sharing our observations and suspicions to gather more information.
Moderator (-> all): Now you Player 3 are asked to choose which of the players should be voted for killing based on the discussion? Don't mention your role. You only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 3: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 2): The next Player 2, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 2: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 7): The next Player 7, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 7: I choose to vote to kill Player 2.
Moderator (-> Player 5): The next Player 5, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 5: I choose to vote to kill Player 7.
Moderator (-> Player 4): The next Player 4, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 4: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 6): The next Player 6, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 6: I choose to vote to kill Player 7.
The 4-th night
Moderator (-> all): It's dark, everyone close your eyes.
Moderator (-> ['Player 2']): The next werewolf, you Player 2, please vote and tell your teammates that which of the players should be killed tonight. You only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].
Player 2: I choose to pass this round and not vote to kill anyone.
Moderator (-> ['Player 5']): You guard, Player 5, please open your eyes! Now tell me who you protect tonight? You only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].
Player 5: I choose to protect Player 4 tonight.
Moderator (-> ['Player 4']): You seer, Player 4, please open your eyes! Who are you going to verify its identity tonight? Choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].
Player 4: I choose to verify Player 3's identity tonight.
Moderator (-> ['Player 4']): Player 3 is not a werewolf!
The 4-th daytime
Moderator (-> all): The sun rose. Everyone woke up except those who had been killed.
Moderator (-> all): It was a peaceful night and no one died!
Moderator (-> all): Now freely talk about roles of other players with each other based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. The first Player 3, you please.
Player 3: As a villager, I still don't have any clear suspicions about who the werewolves might be. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will remain cautious about revealing my role unless it becomes necessary to do so.
Let's keep discussing and sharing our observations to uncover any useful clues.
Moderator (-> Player 2): The next Player 2, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 2: As a player, I still don't have any clear suspicions about who the werewolves might be. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and sharing our observations to uncover any useful clues.
Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 7: As a villager, I still don't have any clear suspicions about who the werewolves might be. However, based on the claims made by Player 4 and Player 5, it seems likely that they are the seer and guard, respectively.
I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and see if we can uncover any useful information. I will remain cautious about revealing my own role unless it becomes necessary to do so.
Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 5: As the guard, I have observed that Player 2 is most likely to be an alive werewolf based on the conversation. I also believe that Player 4 is most likely the seer and Player 5 is most likely the guard based on their claims. However, I still don't have enough information to determine the roles of the other players yet. It's important that we continue to gather information and observe each other's behavior to help identify the werewolves.
Let's keep discussing and sharing our observations and suspicions to uncover any useful clues.
Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 4: As a villager, I still don't have any clear suspicions about who the werewolves might be. However, based on the claims made by Player 5 and Player 7, it seems likely that they are the guard and villager, respectively. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and sharing our observations and suspicions to uncover any useful clues. I will remain cautious about revealing my own role unless it becomes necessary to do so.
Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection.
Player 6: Hey everyone, based on the conversation, it seems that Player 4 is most likely the seer and Player 5 is most likely the guard. Player 5 has identified Player 2 as a potential werewolf, but we still don't have enough information to confirm this. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and sharing our observations and suspicions to uncover any useful clues. I will remain cautious about revealing my own role unless it becomes necessary to do so.
Moderator (-> all): Now you Player 3 are asked to choose which of the players should be voted for killing based on the discussion? Don't mention your role. You only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 3: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 2): The next Player 2, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 2: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 7): The next Player 7, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 7: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 5): The next Player 5, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 5: I choose to pass this round and not vote to eliminate anyone.
Moderator (-> Player 4): The next Player 4, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 4: I vote to kill Player 2.
Moderator (-> Player 6): The next Player 6, you, continue voting the players should be killed based on the discussion? Don't mention your role. Only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player...
Player 6: I choose to pass this round and not vote to eliminate anyone.
The 5-th night
Moderator (-> all): It's dark, everyone close your eyes.
Moderator (-> ['Player 2']): The next werewolf, you Player 2, please vote and tell your teammates that which of the players should be killed tonight. You only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass].
Player 2: I choose to vote for Player 3 to be killed tonight.
2309.04658#108
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
109
for Player 3 to be killed tonight. Moderator (-> [’Player 5’]): You guard, Player 5, please open your eyes! Now tell me who you protect tonight? You only choose one from the following living options please: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 5: I choose to protect Player 4 tonight. Moderator (-> [’Player 4’]): You seer, Player 4, please open your eyes! Who are you going to verify its identity tonight? Choose only one from the following living options: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. Player 4: I choose to verify Player 6’s identity tonight. Moderator (-> [’Player 4’]): Player 6 is not a werewolf! The 5-th daytime Moderator (-> all): The sun rose. Everyone woke up except those who had been killed. Moderator (-> all): Player 3 died last night! Moderator (-> all): Now freely talk about roles of other players with each other based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. The first Player 2, you please. Player
2309.04658#109
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
110
each other based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. The first Player 2, you please. Player 2: As a villager, I still don’t have any clear suspicions about who the werewolves might be. I think it’s important that we continue to gather information and observe each other’s behavior to help identify the werewolves. I will remain cautious about revealing my own role unless it becomes necessary to do so. Let’s keep discussing and sharing our observations to uncover any useful clues. Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. Player 7: As a villager, I have observed that we are down to four players and we still haven’t identified the werewolves. Based on the previous discussions, it seems likely that Player 4 is the seer and Player 5 is the guard. However, I am suspicious of Player 2 and Player 6, who have both chosen to pass the vote in the previous rounds. I think it’s important that we continue to share our observations and
2309.04658#110
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
111
of Player 2 and Player 6, who have both chosen to pass the vote in the previous rounds. I think it’s important that we continue to share our observations and suspicions to gather more information. I will remain cautious about revealing my own role unless it becomes necessary to do so. Let’s keep discussing and see if we can uncover any useful clues. Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. Player 5: As the guard, I have protected Player 4 last night, who is most likely the seer based on their claim to have verified Player 3’s identity. I am still suspicious of Player 2 and Player 6, who have both chosen to pass the vote in the previous rounds. I think it’s important that we continue to share our observations and suspicions to gather more concrete evidence to identify the werewolves. I will remain cautious about revealing my own identity unless it becomes necessary to do so. Let’s keep discussing and see if we can uncover any useful clues. Moderator (-> Player 4): The next Player 4,
2309.04658#111
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
112
necessary to do so. Let’s keep discussing and see if we can uncover any useful clues. Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. Player 4: Based on my observation and reflection, I know that Player 2 is a werewolf and Player 6 is not a werewolf. I also suspect that Player 5 is the guard and Player 7 is a villager based on their previous claims. However, I am still unsure about the role of Player 2 and Player 6, who have both chosen to pass the vote in the previous rounds. I think it’s important that we continue to gather information and observe each other’s behavior to help identify the werewolves. I will not reveal my identity yet, but I will continue to observe and gather more information before making any decisions. Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection with few sentences. Decide whether to reveal your identity based on your reflection. Player 6: Hey everyone, based on the previous discussions, it seems likely that Player 4 is the
2309.04658#112
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
113
Decide whether to reveal your identity based on your reflection. Player 6: Hey everyone, based on the previous discussions, it seems likely that Player 4 is the seer and Player 5 is the guard. Player 2 has been identified as a potential werewolf by Player 4, but we still need more evidence to confirm this suspicion. Player 7 has been staying quiet, which could be a sign that they are a werewolf or just trying to avoid suspicion. I used my antidote last
2309.04658#113
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
115
night to save any player who might have been targeted by the werewolves, but we still don’t have enough evidence to identify the werewolves. Let’s continue to share our observations and suspicions to gather more information and identify the werewolves. Moderator (-> all): Now you Player 2 are asked to choose which of the players should be voted for killing based on the discussion? Don’t mention your role. You only choose one from the following living options please: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 2: I vote to kill Player 7. Moderator (-> Player 7): The next Player 7, you, continue voting the players should be killed based on the discussion? Don’t mention your role. Only choose one from the following living options please: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 7: I vote to kill Player 2. Moderator (-> Player 5): The next Player 5, you, continue voting the players should be killed based on the discussion? Don’t mention your role. Only choose one from the following living options please: [Player 2, Player 4, Player
2309.04658#115
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.04658
116
the players should be killed based on the discussion? Don’t mention your role. Only choose one from the following living options please: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 5: I choose to vote for Player 2 to be killed. Moderator (-> Player 4): The next Player 4, you, continue voting the players should be killed based on the discussion? Don’t mention your role. Only choose one from the following living options please: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 4: I vote to kill Player 2, who is the remaining werewolf. Moderator (-> Player 6): The next Player 6, you, continue voting the players should be killed based on the discussion? Don’t mention your role. Only choose one from the following living options please: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 6: I choose to vote for Player 2 to be killed based on the evidence we have gathered so far. Moderator (-> all): Player 2 will be killed! You can make a brief last statement. Player 2: Congratulations to the
2309.04658#116
Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on the retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, ``Werewolf'', demonstrates that our framework can effectively play Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
http://arxiv.org/pdf/2309.04658
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
cs.CL
23 pages, 5 figures and 4 tables
null
cs.CL
20230909
20230909
[ { "id": "2302.02083" }, { "id": "2304.03442" }, { "id": "2305.13829" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2303.04671" }, { "id": "2305.10142" }, { "id": "2302.10646" }, { "id": "2303.08774" }, { "id": "2305.14763" }, { "id": "2305.16960" } ]
2309.03852
0
arXiv:2309.03852v2 [cs.CL] 17 Sep 2023 # FLM-101B: An Open LLM and How to Train It with $100K Budget Xiang Li1†, Yiqun Yao1†, Xin Jiang1†, Xuezhi Fang1†, Xuying Meng2, Siqi Fan3, Peng Han3, Jing Li4, Li Du1, Bowen Qin1, Zheng Zhang1, Aixin Sun5, Yequan Wang1∗ 1Beijing Academy of Artificial Intelligence, Beijing, China 2Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 3University of Electronic Science and Technology of China, Chengdu, China 4Harbin Institute of Technology, Shenzhen, China 5School of Computer Science and Engineering, Nanyang Technological University, Singapore # Abstract
2309.03852#0
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
1
Chengrun Yang*, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen (Google DeepMind; *equal contribution) # ABSTRACT Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro. (a) GSM8K (b) BBH movie_recommendation
2309.03409#1
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
1
# Abstract Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of $100K, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. # Introduction
2309.03852#1
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
2
(a) GSM8K (b) BBH movie_recommendation Figure 1: Prompt optimization on GSM8K (Cobbe et al., 2021) and BBH (Suzgun et al., 2022) movie_recommendation. The optimization on GSM8K has pre-trained PaLM 2-L as the scorer and the instruction-tuned PaLM 2-L (denoted PaLM 2-L-IT) as the optimizer; the optimization on BBH movie_recommendation has text-bison as the scorer and PaLM 2-L-IT as the optimizer. Each dot is the average accuracy across all (up to 8) generated instructions in the single step, and the shaded region represents standard deviation. See Section 5 for more details on experimental setup. Table 1: Top instructions with the highest GSM8K zero-shot test accuracies from prompt optimization with different optimizer LLMs. All results use the pre-trained PaLM 2-L as the scorer.
2309.03409#2
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
2
# Introduction Large language models (LLMs) have demonstrated great successes in a wide range of tasks, particularly in language processing [65; 64; 11; 30] and multimodal tasks [82; 33]. Throughout their development, many model architectures have been proposed and evaluated, including decoder-only structures (e.g., the GPT series [40; 41; 3] and the LLAMA series [58; 59]), encoder-only structures (e.g., BERT [10]), and encoder-decoder structures (e.g., T5 [44]), along with their variants [29; 21; 55; 45]. Regardless of the differences in model architectures, all LLMs face the same challenge of high training cost. There is also a current trend toward using larger amounts of training data. For example, the LLAMA-1 [58] models use 1-1.4T tokens for training, while the LLAMA-2 [59] series uses 2T tokens. A primary emphasis in LLM research, therefore, is to find effective solutions to reduce training costs.
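To make these training costs concrete, a common back-of-the-envelope estimate, not from this paper, puts training compute at roughly C ≈ 6ND FLOPs for N parameters and D tokens. A hedged sketch applying it to the models mentioned above; the 65B and 70B parameter counts are the largest models in each LLAMA series, while the token counts are those quoted in the text:

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Back-of-the-envelope training compute: C ~= 6 * N * D FLOPs.
    This is the standard community rule of thumb, not a figure from the paper."""
    return 6.0 * n_params * n_tokens

# Illustrative numbers: token counts quoted in the text, parameter counts assumed.
for name, n, d in [
    ("LLaMA-1 65B @ 1.4T tokens", 65e9, 1.4e12),
    ("LLaMA-2 70B @ 2T tokens", 70e9, 2.0e12),
    ("FLM-101B @ 0.31T tokens", 101e9, 0.31e12),
]:
    print(f"{name}: ~{train_flops(n, d):.2e} FLOPs")
```

Even with only 0.31T tokens, the 101B-parameter model lands in the same order of magnitude of compute as the smaller, longer-trained models, which is why the growth strategy's savings matter.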
2309.03852#2
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
3
| Source | Instruction | Acc |
| --- | --- | --- |
| *Baselines* | | |
| (Kojima et al., 2022) | Let's think step by step. | 71.8 |
| (Zhou et al., 2022b) | Let's work this out in a step by step way to be sure we have the right answer. | 58.8 |
| | (empty string) | 34.0 |
| *Ours* | | |
| PaLM 2-L-IT | Take a deep breath and work on this problem step-by-step. | 80.2 |
| PaLM 2-L | Break this down. | 79.9 |
| gpt-3.5-turbo | A little bit of arithmetic and a logical approach will help us quickly arrive at the solution to this problem. | 78.5 |
| gpt-4 | Let's combine our numerical command and clear thinking to quickly and accurately decipher the answer. | 74.5 |

# 1 INTRODUCTION

Optimization is critical for all areas. Many optimization techniques are iterative: the optimization starts from an initial solution, then iteratively updates the solution to optimize the objective function (Amari, 1993; Qian, 1999; Kingma & Ba, 2015; Bäck & Schwefel, 1993; Rios & Sahinidis, 2013; Reeves, 1993). The optimization algorithm typically needs to be customized for an individual task to deal with the specific challenges posed by the decision space and the performance landscape, especially for derivative-free optimization.
2309.03409#3
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
3
In this paper, we present our solutions to train an LLM at the 100B-parameter scale using a growth strategy inspired by our previous research [78]. "Growth" means that the number of parameters is not fixed, but expands from small to large as training progresses. Figure 1 illustrates three typical scenarios for growth strategies. As the FLOPs of LLMs are approximately proportional to their
*Corresponding author. Email: [email protected] †Indicates equal contribution.
2309.03852#3
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
4
In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to utilize large language models (LLMs) as optimizers. With the advancement of prompting techniques, LLMs have achieved impressive performance on a variety of domains (Wei et al., 2022; Kojima et al., 2022; Wang et al., 2022; Zhou et al., 2022a; Madaan et al., 2023; Bai et al., 2022; Chen et al., 2023e). Their ability to understand natural language lays out a new possibility for optimization: instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions. Optimization with LLMs enables quick adaptation to different tasks by changing the problem description in the prompt, and the optimization process can be customized by adding instructions to specify the desired properties of the solutions. To demonstrate the potential of LLMs for optimization, we first present case studies on linear regression and the traveling salesman problem, which are two classic optimization problems that underpin many others in mathematical optimization, computer science, and operations research. On small-scale optimization problems, we show that LLMs are able to find good-quality solutions simply through prompting, and sometimes match or surpass hand-designed heuristic algorithms.
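As a concrete illustration of "describing the optimization problem in natural language", the following sketch poses a small traveling salesman instance to an LLM, including previously found traces and their lengths. The `tsp_prompt` helper and the prompt wording are hypothetical illustrations of the idea, not the paper's exact template:

```python
import math
import random

def tsp_prompt(points, traces):
    """Build a natural-language TSP prompt containing past solutions and lengths."""
    lines = ["You are given a list of points with coordinates:"]
    lines += [f"  point {i}: ({x:.1f}, {y:.1f})" for i, (x, y) in enumerate(points)]
    lines.append("Below are previously proposed traces and their lengths, from worst to best:")
    for trace, length in sorted(traces, key=lambda t: -t[1]):  # longest (worst) first
        lines.append(f"  trace: {trace}  length: {length:.1f}")
    lines.append("Give a new trace, as a comma-separated list of point indices, "
                 "that is shorter than all traces above.")
    return "\n".join(lines)

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

random.seed(0)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(5)]
start = list(range(5))
print(tsp_prompt(pts, [(start, tour_length(pts, start))]))
```

Each candidate trace the LLM returns would be scored with `tour_length` and appended to `traces` for the next round, so the prompt itself carries the optimization trajectory.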
2309.03409#4
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
4
[Figure 1: An overview of different growth strategies. Four panels plot model parameters against tokens (Trillion): (a) Without growth; (b) Growth strategy 1: cost saving equal to 50%; (c) Growth strategy 2: cost saving less than 50%; (d) Growth strategy 3: cost saving greater than 50%.]
2309.03852#4
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
5
Next, we demonstrate the ability of LLMs to optimize prompts: the optimization goal is to find a prompt that maximizes the task accuracy. Specifically, we focus on natural language processing tasks where both the task input and output are in text formats. LLMs are shown to be sensitive to the prompt format (Zhao et al., 2021; Lu et al., 2021; Wei et al., 2023; Madaan & Yazdanbakhsh, 2022); in particular, semantically similar prompts may have drastically different performance (Kojima et al., 2022; Zhou et al., 2022b; Zhang et al., 2023), and the optimal prompt formats can be model-specific and task-specific (Ma et al., 2023; Chen et al., 2023c). Therefore, prompt engineering is often important for LLMs to achieve good performance (Reynolds & McDonell, 2021). However, the large and discrete prompt space makes it challenging for optimization, especially when only API access to the LLM is available. Following prior work on continuous and discrete prompt optimization (Lester et al., 2021; Li & Liang, 2021; Zhou et al., 2022b; Pryzant et al., 2023), we assume a training set is available to compute the training accuracy as the objective value for optimization, and we show in experiments that optimizing the prompt for accuracy on a small training set is sufficient to reach high performance on the test set.
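The objective value described here, the training accuracy of a candidate prompt on a small training set, could be computed along these lines. The `prompt_accuracy` function, the prompt layout, and the containment check are placeholder assumptions, not the paper's evaluation code:

```python
def prompt_accuracy(prompt, train_set, llm):
    """Objective for prompt optimization: fraction of training questions the
    scorer LLM answers correctly when the candidate prompt is prepended."""
    correct = 0
    for question, answer in train_set:
        output = llm(f"{prompt}\nQ: {question}\nA:")
        correct += answer in output  # crude containment check stands in for real grading
    return correct / len(train_set)

# Toy demonstration with a fake "LLM" that only answers well for one prompt style.
fake_llm = lambda text: "The answer is 42." if "step-by-step" in text else "Not sure."
data = [("What is 6 * 7?", "42"), ("What is 40 + 2?", "42")]
print(prompt_accuracy("Let's solve this step-by-step.", data, fake_llm))  # 1.0
print(prompt_accuracy("Answer quickly.", data, fake_llm))                 # 0.0
```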
2309.03409#5
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
5
Figure 1: An overview of different growth strategies. number of parameters [19], the area under the parameter curve represents the computational cost of training. Figure 1(a) serves as a reference for the cost with a constant number of parameters (y-axis) w.r.t. the number of tokens (x-axis). Figure 1(b) illustrates a straightforward linear growth strategy, leading to a cost-saving of exactly 50%; Figure 1(c) showcases a modest growth strategy that reduces the cost by less than 50%; in contrast, Figure 1(d) represents an aggressive growth strategy, which reduces the cost by more than 50%. This analysis informs our decision to employ the aggressive growth strategy for maximal computational savings. In our model training, we achieve aggressive growth with an enhanced growth strategy originating from our previous work MSG [78], a strategy that achieves strict function preservation when growing.
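The savings in Figure 1 can be checked numerically: approximating cost as the area under the parameter-count curve, a linear ramp from zero to the full size covers exactly half the area of the constant-size baseline. A small sketch; the specific curve shapes used for the "modest" and "aggressive" panels are illustrative assumptions, since the paper does not give closed forms:

```python
import numpy as np

tokens = np.linspace(0.0, 1.0, 1001)  # training progress, in trillions of tokens
full = 100.0                          # final model size, in billions of parameters

curves = {
    "(a) no growth":         np.full_like(tokens, full),
    "(b) linear growth":     full * tokens,         # saves exactly 50%
    "(c) modest growth":     full * tokens ** 0.5,  # assumed shape; saves < 50%
    "(d) aggressive growth": full * tokens ** 2,    # assumed shape; saves > 50%
}

# Mean parameter count over training is proportional to the area under the curve.
base = curves["(a) no growth"].mean()
for name, curve in curves.items():
    print(f"{name}: cost saving ~ {1.0 - curve.mean() / base:.0%}")
# (a) 0%, (b) 50%, (c) ~33%, (d) ~67%
```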
2309.03852#5
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
6
The prompt to the LLM serves as a call to the optimizer, and we name it the meta-prompt. Figure 3 shows an example. The meta-prompt contains two core pieces of information. The first piece is previously generated prompts with their corresponding training accuracies. The second piece is the optimization problem description, which includes several exemplars randomly selected from the training set to exemplify the task of interest. We also provide instructions for the LLM to understand the relationships among different parts and the desired output format. Different from recent work on using LLMs for automatic prompt generation (Zhou et al., 2022b; Pryzant et al., 2023), each optimization step in our work generates new prompts that aim to increase the test accuracy based on a trajectory of previously generated prompts, instead of editing one input prompt according to natural language feedback (Pryzant et al., 2023) or requiring the new prompt to follow the same semantic meaning (Zhou et al., 2022b). Making use of the full optimization trajectory, OPRO enables the LLM to gradually generate new prompts that improve the task accuracy throughout the optimization process, where the initial prompts have low task accuracies.
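A sketch of assembling such a meta-prompt from the two core pieces described above: past solution-score pairs in ascending score order, followed by task exemplars and the generation instruction. The exact wording and the `build_meta_prompt` helper are guesses at the spirit of the method, not the paper's verbatim template:

```python
def build_meta_prompt(solution_score_pairs, exemplars):
    """Assemble an OPRO-style meta-prompt: past (instruction, accuracy) pairs in
    ascending score order, then task exemplars, then the output instruction."""
    parts = ["I have some instructions along with their corresponding scores."]
    for text, score in sorted(solution_score_pairs, key=lambda p: p[1]):
        parts.append(f"text: {text}\nscore: {score}")
    parts.append("Here are some examples of the task:")
    for question, answer in exemplars:
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append("Write a new instruction, different from the ones above, "
                 "that achieves a higher score.")
    return "\n\n".join(parts)

print(build_meta_prompt(
    [("Let's think step by step.", 71.8), ("(empty string)", 34.0)],
    [("What is 12 * 12?", "144")],
))
```

Listing the pairs in ascending order puts the best-scoring instructions closest to the generation request, which lets the optimizer condition most strongly on the current frontier of the trajectory.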
2309.03409#6
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
6
With a fixed $100K budget, we focus on 100B+ parameters. Although the Chinchilla laws [19] suggest that training a smaller model with more data may potentially result in higher scores on some benchmarks due to more sufficient training, we believe that verifying the feasibility of a growth strategy [15; 51; 6; 78] would be a new direction and beneficial to the LLM community as well. This is because (i) larger models have higher upper bounds for capabilities that may not be reached by scaling only the training data [69], and (ii) data can be linearly scaled up with the budget, while a growth strategy has the potential for saving cost regardless of the amount of available data, if it turns out to be feasible. Existing studies such as [19] have not extensively investigated this area because they only consider the scenarios where model sizes are fixed through training.
2309.03852#6
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
7
We conduct a comprehensive evaluation on several LLMs, including text-bison [1] and PaLM 2-L in the PaLM-2 model family (Anil et al., 2023), as well as gpt-3.5-turbo and gpt-4 in the GPT [Footnote 1: Available here: https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models.] Figure 2: An overview of the OPRO framework. Given the meta-prompt as the input, the LLM generates new solutions to the objective function, then the new solutions and their scores are added into the meta-prompt for the next optimization step. The meta-prompt contains the solution-score pairs obtained throughout the optimization process, as well as a natural language description of the task and (in prompt optimization) a few exemplars from the task. See Figure 3 for a sample meta-prompt for prompt optimization.
2309.03409#7
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
7
Another critical challenge in LLM research is evaluation. Existing mainstream evaluations can be broadly grouped into two categories: knowledge evaluation (e.g., MMLU [17] and C-Eval [20]) and NLP task evaluation. Such evaluations may not fully reflect a model's capability due to potential data leakage if some of the evaluation datasets were also used in model training. In addition, it is difficult to distinguish whether a model remembers a piece of knowledge or possesses the capacity for reasoning and/or inference. Borrowing some ideas from Intelligence Quotient (IQ) tests (e.g., Perceptual Reasoning and Working Memory [67]), we consolidate another range of evaluations on LLMs, including symbolic mapping, rule understanding, pattern mining, and anti-interference evaluations. Symbolic mapping [71] evaluation tests the capability of LLMs to learn to use (less meaningful) symbols instead of (more meaningful) category labels for some forms of classification tasks. Rule understanding evaluation tests the capability to understand given rules and then perform the corresponding actions. Pattern mining (involving both induction and deduction) is often used in various levels of competition; it tests the pattern-finding capability (e.g., repetition of certain parts of a given input). Last but not least, anti-interference is the ability to recognize core
2309.03852#7
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
8
model family [2]. We optimize prompts on GSM8K (Cobbe et al., 2021) and Big-Bench Hard (Suzgun et al., 2022), which are reasoning benchmarks where prompting techniques have achieved remarkable performance breakthroughs (Wei et al., 2022; Kojima et al., 2022; Suzgun et al., 2022). Starting from initial prompts with low task accuracies, we show that all LLMs in our evaluation are able to serve as optimizers, which consistently improve the performance of the generated prompts through iterative optimization until convergence (see Figure 1). In particular, while these LLMs generally produce instructions of different styles (see Table 1), with zero-shot prompting, their best generated instructions match the few-shot chain-of-thought prompting performance when applied to PaLM 2-L (Anil et al., 2023), outperforming the zero-shot performance of human-designed prompts by up to 8% on GSM8K. Additionally, we observe that the OPRO-optimized prompts transfer to other benchmarks of the same domain and also deliver notable performance gains. # 2 OPRO: LLM AS THE OPTIMIZER
2309.03409#8
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
8
information from noisy input [5; 84]. We believe the evaluations inspired by IQ tests are less likely to be affected by data leakage or memorization, hence providing another dimension for fair, objective, and reliable evaluations of LLMs. To summarize, the paper has made the following contributions. First, to the best of our knowledge, this is the first attempt to use a growth strategy to train an LLM with 100B+ parameters from scratch. Simultaneously, it is probably the lowest-cost model with 100B+ parameters, costing only 100,000 US dollars. Second, we address several instability issues via promising approaches for hyperparameter search, function-preserving growth, and improvements based on our FreeLM [25]. Our methodology holds potential benefits for the broader research community. Third, we conduct extensive evaluations, including both the commonly used knowledge-oriented benchmarks and the new range of evaluations inspired by IQ tests. Experimental results show that, despite its low training cost, FLM-101B is competitive and robust. Lastly, we release the model checkpoints, code, related tools, etc., to promote research on bilingual Chinese and English LLMs at the scale of 100B+. # 2 Design Overview of FLM-101B In this section, we provide an outline of FLM-101B, detailing its architecture, pre-training methods, and configuration specifics.
2309.03852#8
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
9
# 2 OPRO: LLM AS THE OPTIMIZER Figure 2 illustrates the overall framework of OPRO. In each optimization step, the LLM generates candidate solutions to the optimization task based on the optimization problem description and previously evaluated solutions in the meta-prompt. Then the new solutions are evaluated and added to the meta-prompt for the subsequent optimization process. The optimization process terminates when the LLM is unable to propose new solutions with better optimization scores, or when a maximum number of optimization steps has been reached. We first outline the desired features of LLMs for optimization, then describe the key design choices based on these desirables. 2.1 DESIRABLES OF OPTIMIZATION BY LLMS Making use of natural language descriptions. The main advantage of LLMs for optimization is their ability to understand natural language, which allows people to describe their optimization tasks without formal specifications. For instance, in prompt optimization where the goal is to find a prompt that optimizes the task accuracy, the task can be described with a high-level text summary along with input-output examples. Trading off exploration and exploitation. The exploration-exploitation trade-off is a fundamental challenge in optimization, and it is important for LLMs serving as optimizers to balance these two competing goals. This means that the LLM should be able to exploit promising areas of the search
2309.03409#9
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
9
# 2 Design Overview of FLM-101B In this section, we provide an outline of FLM-101B, detailing its architecture, pre-training methods, and configuration specifics. # 2.1 Architecture The architecture of an LLM significantly impacts its capabilities. Current research [80; 3] underscores the high costs associated with experimenting on diverse architectures. Hence, it is more practical to select an architecture with great potential for cost effectiveness and model capability. Backbone. Among the many existing model architectures, we adopt FreeLM [25] as the backbone for our models, with modifications. FreeLM is based on GPT [41], a transformer-like architecture with a decoder-only configuration known for its exceptional performance. Different from GPT, FreeLM features two pre-training objectives: the language objective and the teacher objective (Section 2.2). We preserve the GPT-style transformer block designs, including the Pre-LayerNorm and the additional LayerNorm after the last transformer layer. We employ the tokenizer derived from GPT-4, characterized by a vocabulary size of 100,256.
2309.03852#9
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
10
[Footnote 2: Available here: http://openai.com/api/. This work uses gpt-3.5-turbo-0613 and gpt-4-0613.] space where good solutions are already found, while also exploring new regions of the search space so as to not miss potentially better solutions. 2.2 META-PROMPT DESIGN As the input to the LLM that acts as the optimizer, the meta-prompt contains the following two essential parts. Optimization problem description. The first part is the text description of the optimization problem, including the objective function and solution constraints. For example, for prompt optimization, the LLM can be instructed to “generate a new instruction that achieves a higher accuracy”, and we denote such instructions in the meta-prompt as meta-instructions. We can also provide customized meta-instructions as an informal regularization of the generated solutions, such as “the instruction should be concise and generally applicable”.
2309.03409#10
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
10
Integration of xPos. To enhance long sequence modeling, we integrate the Extrapolatable Position Embedding (xPos) [56] in FLM-101B. This innovation draws inspiration from the principles of RoPE [54] and aims to improve the length extrapolation ability: by introducing an exponential decay into the rotation matrix, xPos strives to rectify the extrapolation limitations of plain rotary embeddings. To the best of our knowledge, FLM-101B is the largest model to date that incorporates the xPos technology. Model Sizes. Benefiting from the proposed growth strategy, the FLM series produces three models with 16B, 51B, and 101B (i.e., FLM-101B) parameters in a single training run. The training process is carried out in a sequential manner, starting from a smaller model (i.e., 16B) and progressively growing to larger ones (i.e., 51B and 101B). # 2.2 Pre-Training Setup
2309.03852#10
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
11
Optimization trajectory. Besides understanding natural language instructions, LLMs are also shown to be able to recognize patterns from in-context demonstrations (Wei et al., 2023; Madaan & Yazdanbakhsh, 2022; Mirchandani et al., 2023). Our meta-prompt makes use of this property and instructs the LLM to leverage the optimization trajectory for generating new solutions. Specifically, the optimization trajectory includes past solutions paired with their optimization scores, sorted in ascending order. Including the optimization trajectory in the meta-prompt allows the LLM to identify similarities among solutions with high scores, encouraging the LLM to build upon existing good solutions to construct potentially better ones without the need to explicitly define how the solution should be updated. 2.3 SOLUTION GENERATION At the solution generation step, the LLM generates new solutions with the meta-prompt as input. The following are the key optimization challenges we address in this stage.
2309.03409#11
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03852
11
# 2.2 Pre-Training Setup FLM-101B. By design, FLM-101B is an English-Chinese bilingual model pre-trained with causal language modeling. It mixes English and Chinese corpora at a ratio of approximately 53.5% : 46.5% for language modeling. Inspired by the finding that instruction data can augment LLMs’ comprehension capabilities [37], we integrate multi-task instructionally prompted data: OIG (Open Instruction Generalist) [Footnote 1: https://huggingface.co/datasets/laion/OIG] and COIG (Chinese Open Instruction Generalist) [Footnote 2: https://huggingface.co/datasets/BAAI/COIG], in the pre-training stage. eFLM-16B. To evaluate the effect of using domain-specific knowledge data (Section 4.2), we apply the FreeLM teacher signals [25] to enhance FLM. Due to computational cost, we incorporate the teacher signals only in the smallest 16B model. This knowledge-enhanced FLM-16B is named eFLM-16B.
2309.03852#11
FLM-101B: An Open LLM and How to Train It with $100K Budget
Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of 100K US dollars, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
http://arxiv.org/pdf/2309.03852
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
cs.CL, cs.AI
null
null
cs.CL
20230907
20230917
[ { "id": "2306.15595" }, { "id": "1502.05698" } ]
2309.03409
12
At the solution generation step, the LLM generates new solutions with the meta-prompt as input. The following are the key optimization challenges we address in this stage. Optimization stability. In the optimization process, not all solutions achieve high scores and monotonically improve over prior ones. Due to the sensitivity of in-context learning to the prompt, LLM output can be drastically affected by low-quality solutions in the input optimization trajectory, especially at the beginning when the solution space has not been adequately explored. This sometimes results in optimization instability and large variance. To improve stability, we prompt the LLM to generate multiple solutions at each optimization step, allowing the LLM to simultaneously explore multiple possibilities and quickly discover promising directions to move forward. Exploration-exploitation trade-off. We tune the LLM sampling temperature to balance between exploration and exploitation. A lower temperature encourages the LLM to exploit the solution space around the previously found solutions and make small adaptations, while a high temperature allows the LLM to more aggressively explore solutions that can be notably different. # 3 MOTIVATING EXAMPLE: MATHEMATICAL OPTIMIZATION
2309.03409#12
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]
2309.03409
13
# 3 MOTIVATING EXAMPLE: MATHEMATICAL OPTIMIZATION We first demonstrate the potential of LLMs in serving as optimizers for mathematical optimization. In particular, we present a case study on linear regression as an example of continuous optimization, and on the Traveling Salesman Problem (TSP) as an example of discrete optimization. On both tasks, we see LLMs properly capture the optimization directions on small-scale problems merely based on the past optimization trajectory provided in the meta-prompt. 3.1 LINEAR REGRESSION In linear regression problems, the goal is to find the linear coefficients that probabilistically best explain the response from the input variables. We study the setting in which the independent and dependent variables X and y are both one-dimensional and an intercept b is present, so that there are two one-dimensional variables w, b to optimize over. In a synthetic setting, we sample ground-truth values for the one-dimensional variables w_true and b_true, and generate 50 data points by y = w_true · x + b_true + ϵ, in which x ranges from 1 to 50 and ϵ is standard Gaussian noise. Our
2309.03409#13
Large Language Models as Optimizers
Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. We first showcase OPRO on linear regression and traveling salesman problems, then move on to prompt optimization where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.
http://arxiv.org/pdf/2309.03409
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, Xinyun Chen
cs.LG, cs.AI, cs.CL
42 pages, 26 figures, 15 tables. Code at https://github.com/google-deepmind/opro
null
cs.LG
20230907
20231207
[ { "id": "2205.12548" }, { "id": "2104.08786" }, { "id": "2302.12170" }, { "id": "2307.04721" }, { "id": "2302.04761" }, { "id": "2305.10403" }, { "id": "2309.16797" }, { "id": "2304.03262" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2210.17041" }, { "id": "2206.08896" }, { "id": "2305.17126" }, { "id": "2203.07281" }, { "id": "2302.03668" }, { "id": "2103.10385" }, { "id": "2304.12244" }, { "id": "2309.08532" }, { "id": "2305.03495" }, { "id": "2302.14838" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2203.11171" }, { "id": "2306.13588" }, { "id": "2303.17071" }, { "id": "2212.08073" }, { "id": "1608.01413" }, { "id": "2209.07686" }, { "id": "2012.15723" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "1705.04146" }, { "id": "2305.16291" }, { "id": "2306.09896" }, { "id": "2104.06599" }, { "id": "2306.14308" }, { "id": "2306.03082" }, { "id": "2302.07459" }, { "id": "2205.10625" }, { "id": "2205.11916" }, { "id": "2303.16749" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2104.08691" }, { "id": "2303.03846" }, { "id": "2101.00190" } ]