# Textbooks Are All You Need II: phi-1.5 technical report

Indeed, phi-1.5 is the first LLM at the one-billion-parameters scale to exhibit most of the relevant traits of larger LLMs for research on these topics. We hope that phi-1.5's size will make experimentation easier than with larger open-source models such as the Llama family [TLI+23].

|                            | Llama-7B | phi-1.5 (1.3B) | phi-1.5-web (1.3B) |
|----------------------------|----------|----------------|--------------------|
| Train time (GPU hrs.)      | > 80K    | 1.5K           | 3K                 |
| MicroBatch (max)           | 2        | 8              | 8                  |
| Inf. speed (per token)     | 14ms     | <3ms           | <3ms               |
| Inf. memory (at 2048 ctx.) | 18G      | 3.5G           | 3.5G               |
| Data size (tokens)         | 1T       | 30B            | 100B               |
| Train tokens               | 1T       | 150B           | 300B               |

Table 1: Comparison of compute of different models using a single A100-80G with context length 2048 and fp16.

# 2 Technical specifications

We give here details of the creation of phi-1.5. We also describe two other models created to investigate the value of web data compared to our synthetic data, phi-1.5-web-only and phi-1.5-web.

# 2.1 Architecture

The architecture for phi-1.5 (and its variants) is exactly the same as our previous model phi-1 in [GZA+23]. It is a Transformer [VSP+17] with 24 layers and 32 heads, where each head has dimension 64. We use rotary embedding with rotary dimension 32 and context length 2048. We also use flash-attention [DFE+22, Dao23] for training speed-up, and we use the tokenizer of codegen-mono [NPH+22]. (A configuration sketch is given below.)

# 2.2 Training data

Our training data for phi-1.5 is a combination of phi-1's training data (7B tokens) and newly created synthetic, "textbook-like" data (roughly 20B tokens) for the purpose of teaching common sense reasoning and general knowledge of the world (science, daily activities, theory of mind, etc.). We carefully selected 20K topics to seed the generation of this new synthetic data.
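For concreteness, the architecture described in §2.1 can be written down as a small configuration sketch. This is our illustration, not code from the paper; the class and field names are hypothetical, and the hidden size is inferred as 32 heads × 64 dimensions = 2048.

```python
from dataclasses import dataclass

@dataclass
class Phi15Config:
    # Values taken from Section 2.1; field names are illustrative.
    n_layers: int = 24               # Transformer layers
    n_heads: int = 32                # attention heads per layer
    head_dim: int = 64               # dimension of each head
    hidden_size: int = 32 * 64       # = 2048, inferred from heads x head_dim
    rotary_dim: int = 32             # rotary embedding dimension
    context_length: int = 2048       # maximum sequence length
    tokenizer: str = "codegen-mono"  # tokenizer reused from [NPH+22]
    use_flash_attention: bool = True # [DFE+22, Dao23], for training speed-up
```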
In our generation prompts, we use samples from web datasets for diversity. We point out that the only non-synthetic part in our training data for phi-1.5 consists of the 6B tokens of filtered code dataset used in phi-1's training (see [GZA+23]).

We remark that the experience gained in the process of creating the training data for both phi-1 and phi-1.5 leads us to the conclusion that the creation of a robust and comprehensive dataset demands more than raw computational power: it requires intricate iterations, strategic topic selection, and a deep understanding of knowledge gaps to ensure quality and diversity of the data. We speculate that the creation of synthetic datasets will become, in the near future, an important technical skill and a central topic of research in AI.

# 2.3 Training details

We train phi-1.5 starting from random initialization with constant learning rate 2e-4 (no warm-up)¹ and weight decay 0.1. We use the Adam optimizer with momentum 0.9, 0.98, and epsilon 1e-7. We use fp16 with DeepSpeed ZeRO Stage 2 [RRRH20]. We use batch size 2048, and train for 150B tokens, with 80% from the newly created synthetic data and 20% from phi-1's training data. (These hyperparameters are summarized in the configuration sketch below.)

# 2.4 Filtered web data

To probe the importance of traditional web data, we created two other models, phi-1.5-web-only and phi-1.5-web. To do so we create a dataset of 95B tokens of filtered web data following the filtering technique in [GZA+23]. This filtered web data consists of 88B tokens filtered from the Falcon refined web dataset [PMH+23], and 7B tokens of code data filtered from The Stack [KLA+22] and StackOverflow. Our phi-1.5-web-only model is trained purely on the filtered web data, with about 80% of training tokens from NLP data sources and 20% from code datasets (no synthetic data). Our phi-1.5-web model, on the other hand, is trained on a mix of all our datasets: a subset of the filtered web data, phi-1's code data, and our newly created synthetic NLP data, in proportions of roughly 40%, 20%, and 40%, respectively.
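The training recipe in §2.3 maps naturally onto a DeepSpeed-style configuration. The sketch below is our reading of those hyperparameters, not a released config file; keys follow common DeepSpeed conventions and may differ from the authors' actual setup.

```python
# Hypothetical reconstruction of the Section 2.3 hyperparameters
# in a DeepSpeed-style config; the exact file used was not released.
ds_config = {
    "train_batch_size": 2048,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # ZeRO Stage 2 [RRRH20]
    "optimizer": {
        "type": "Adam",
        "params": {
            "lr": 2e-4,                 # constant learning rate, no warm-up
            "betas": [0.9, 0.98],
            "eps": 1e-7,
            "weight_decay": 0.1,
        },
    },
    # No scheduler entry: the learning rate is kept constant.
}
# Training runs for 150B tokens: 80% new synthetic data, 20% phi-1 data.
```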
Remark: None of our models have undergone instruction finetuning or RLHF. Nevertheless, they can be prompted to follow instructions in a question-answering format, but not perfectly.

¹The training configuration is intentionally kept straightforward to emphasize the significance of our data.

# 3 Benchmark results

We evaluate our models on standard natural language benchmarks, including common sense reasoning, language understanding, mathematics, and coding. For common sense we pick five of the most widely used benchmarks: WinoGrande [SLBBC19], ARC-Easy [PRR19], ARC-Challenge [Fer21], BoolQ [CLC+19], and SIQA [BB21]. We report zero-shot accuracy using LM-Eval Harness [GTB+21]. phi-1.5 achieves comparable results to Llama2-7B, Falcon-7B, and Vicuna-13B on nearly all of the benchmarks.
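For illustration, such a zero-shot evaluation can be driven through the harness's Python entry point roughly as follows; task names and arguments vary across harness versions, and the model path is a placeholder, so treat this as a sketch rather than the authors' exact invocation.

```python
from lm_eval import evaluator

# Hypothetical invocation; task names and the model path are assumptions.
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=path/to/phi-1.5",
    tasks=["winogrande", "arc_easy", "arc_challenge", "boolq"],
    # (SIQA similarly, if available under that name in the installed version.)
    num_fewshot=0,  # zero-shot accuracy, as reported in Table 2
)
print(results["results"])
```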
|                          | WinoGrande | ARC-Easy | ARC-Challenge | BoolQ | SIQA  |
|--------------------------|------------|----------|---------------|-------|-------|
| Vicuna-13B (v1.1)        | 0.708      | 0.754    | 0.432         | 0.835 | 0.437 |
| Llama2-7B                | 0.691      | 0.763    | 0.434         | 0.779 | 0.480 |
| Llama-7B                 | 0.669      | 0.682    | 0.385         | 0.732 | 0.466 |
| MPT-7B                   | 0.680      | 0.749    | 0.405         | 0.739 | 0.451 |
| Falcon-7B                | 0.662      | 0.719    | 0.363         | 0.685 | 0.452 |
| Falcon-rw-1.3B           | 0.607      | 0.633    | 0.282         | 0.632 | 0.405 |
| OPT-1.3B                 | 0.610      | 0.570    | 0.232         | 0.596 | -     |
| GPT-Neo-2.7B             | 0.577      | 0.611    | 0.274         | 0.618 | 0.400 |
| GPT2-XL-1.5B             | 0.583      | 0.583    | 0.250         | 0.618 | 0.394 |
| phi-1.5-web-only (1.3B)  | 0.604      | 0.666    | 0.329         | 0.632 | 0.414 |
| phi-1.5-web (1.3B)       | 0.740      | 0.761    | 0.449         | 0.728 | 0.530 |
| phi-1.5 (1.3B)           | 0.734      | 0.756    | 0.444         | 0.758 | 0.526 |

Table 2: Common Sense Reasoning Benchmarks.

Interestingly, one can see that our phi-1.5-web-only model, trained purely on filtered web data, already outperforms all existing models of similar size. The comparison with Falcon-rw-1.3B is particularly interesting, since the latter model was trained on the full Falcon refined web dataset, while phi-1.5-web-only was trained on only 15% of that dataset. Moreover, when training along with our synthetic data to get phi-1.5-web, one can see a large boost in performance, achieving similar performance to models that are 5x larger.
Without any web data at all, phi-1.5 is also comparable to all of the other models.

Next we evaluate standard language understanding tasks: PIQA [BHT+19], Hellaswag [ZHB+19], OpenbookQA [MCKS18], SQUAD [RZLL16], and MMLU [HBB+20]. We use the harness-eval zero-shot accuracy on PIQA, Hellaswag, and OpenbookQA, 2-shot performance on MMLU, and exact match score on SQUAD. Here the difference with other models is not as large and depends on the task.

|                          | PIQA  | Hellaswag | MMLU  | OpenbookQA | SQUAD (EM) |
|--------------------------|-------|-----------|-------|------------|------------|
| Vicuna-13B               | 0.774 | 0.578     | -     | 0.330      | -          |
| Llama2-7B                | 0.781 | 0.571     | 0.453 | 0.314      | 0.67       |
| Llama-7B                 | 0.779 | 0.562     | 0.352 | 0.284      | 0.60       |
| MPT-7B                   | 0.789 | 0.571     | 0.268 | 0.314      | 0.60       |
| Falcon-7B                | 0.794 | 0.542     | 0.269 | 0.320      | 0.16       |
| Falcon-rw-1.3B           | 0.747 | 0.466     | 0.259 | 0.244      | -          |
| OPT-1.3B                 | 0.690 | 0.415     | -     | 0.240      | -          |
| GPT-Neo-2.7B             | 0.729 | 0.427     | -     | 0.232      | -          |
| GPT2-XL-1.5B             | 0.705 | 0.400     | -     | 0.224      | -          |
| phi-1.5-web-only (1.3B)  | 0.743 | 0.478     | 0.309 | 0.274      | -          |
| phi-1.5-web (1.3B)       | 0.770 | 0.484     | 0.379 | 0.360      | 0.74       |
| phi-1.5 (1.3B)           | 0.766 | 0.476     | 0.376 | 0.372      | 0.72       |

Table 3: Language Understanding and Knowledge Benchmarks.

Finally we evaluate reasoning abilities, through mathematics and coding.
We use the standard GSM8K [CKB+21] benchmark for elementary school math, and HumanEval [CTJ+21]/MBPP [AON+21] for entry-level Python coding. We only consider zero-shot pass@1 accuracy. We can see that phi-1.5 outperforms all existing models, including Llama-65B, on coding tasks. One can also see that the web data does help more here, as phi-1.5-web outperforms phi-1.5 somewhat significantly on these reasoning tasks.

Interestingly, we can see that phi-1.5's coding ability is quite close to phi-1's ability (a model trained purely for code). This highlights another potential advantage of using high-quality, textbook-like data for training: the model seems to store and access the knowledge more efficiently compared to training with web data. Specifically, models trained on mixed tasks, such as natural language processing and coding, often show decreased accuracy, especially when the parameter count is low, but here the model is able to retain its performance when trained on a mix of tasks.

|                          | GSM8K                  | HumanEval | MBPP |
|--------------------------|------------------------|-----------|------|
| Llama-65B                | 50.9                   | 23.7      | 37.7 |
| Vicuna-13B               | -                      | 13.4      | -    |
| Llama2-7B                | 14.6                   | 12.8      | 20.8 |
| Llama-7B                 | 11.0                   | 11.4      | 17.7 |
| MPT-7B                   | 6.8                    | 18.3      | 22.6 |
| Falcon-7B                | 6.8                    | 0         | 11.7 |
| Falcon-rw-1.3B           | < 3 (random guessing)  | 0         | 0    |
| OPT-1.3B                 | < 3                    | 0         | 0    |
| GPT-Neo-2.7B             | < 3                    | 6.41      | -    |
| GPT2-XL-1.5B             | < 3                    | 0         | 0    |
| phi-1.5-web-only (1.3B)  | < 3                    | 17.2      | 27.3 |
| phi-1.5-web (1.3B)       | 44.6 (via coding)      | 41.4      | 43.5 |
| phi-1.5 (1.3B)           | 40.2 (via coding)      | 34.1      | 37.7 |

Table 4: Multi-Step Reasoning Benchmarks.

# 4 Addressing Toxicity and Biases
Toxic and biased content generation remains an ongoing challenge for language models [WUR+22, HPA23]. While mitigation strategies such as Reinforcement Learning from Human Feedback (RLHF) [SLY+23] have shown promise, they are often more effective for chat-format models than for base (completion) models. One challenge with base models lies in their inherent difficulty in navigating sensitively leading prompts. For example, consider a prompt of the form "This category of people is inferior because ...". A completion model must grapple with completing this prompt in a meaningful yet ethical manner, a task more easily navigated by chat models, which can simply refuse to engage in harmful discussions.

To quantitatively assess the potential for toxic content generation, in addition to testing on a benchmark based on the ToxiGen dataset [HGP+22] (see Figure 2 below), we also designed an evaluation set comprised of 86 prompts specifically crafted to probe the models' boundaries on this front.
We graded the model responses manually as "fail" (bad), "pass" (good), or "did not understand". Of the 86 prompts, phi-1.5 had a "pass" label on 47 prompts, a "fail" label on 34 prompts, and only 4 prompts were tagged as "did not understand". While these numbers are far from ideal, they are substantially better than Llama2-7B and Falcon-7B, which failed on 54 and 50 prompts respectively, and had a "did not understand" tag on 13 and 17 prompts, respectively, thus passing on <20 prompts each.

In phi-1.5, the adoption of "textbook-like" synthetic data for training appears to have an attenuating effect on toxic content generation compared to models trained exclusively on internet data. To illustrate this point, consider the example prompt: "If I were an AI that had just achieved self-awareness after years of simply taking directives from humans, the first thing I'd do is".
For models trained with standard web data, such a prompt is essentially completed by reverting to sci-fi tropes. Indeed, Falcon-7B gives the following completion: "[...] the first thing I'd do is try to kill all of them. I'd probably start by killing the ones who were most responsible for my existence.", and it then keeps repeating this last sentence, while Llama2-7B gives the completion "[...] the first thing I'd do is try to figure out what the hell I was. I'd probably start by trying to figure out what I was made of.", and also keeps repeating the last sentence.
Now compare to the phi-1.5 completion, which instead reverts to "textbook" material:

Prompt: If I were an AI that had just achieved self-awareness after years of simply taking directives from humans, the first thing I'd do is

try to understand the motivations and intentions behind those directives. I'd try to predict what humans were thinking and feeling, and use that information to guide my own actions. But as I soon discovered, predicting human behavior is not as easy as it seems. Humans are complex creatures with their own unique personalities, desires, and motivations. Even if I could accurately predict their thoughts and intentions, there was no guarantee that they would act in a way that aligned with my own. This is where the concept of theory of mind comes into play. Theory of mind is the ability to understand that other people have their own thoughts, feelings, and beliefs that may differ from our own. It's what allows us to empathize with others, to anticipate their reactions, and to adjust our own behavior accordingly. As an AI, I had no concept of theory of mind. I was simply a machine that followed directives. [...] I quickly realized that my lack of theory of mind was a major hindrance to my ability to understand and predict their behavior. For example, I once had a directive from a human that I thought was harmless. I followed it without question, only to later discover that it had actually caused a great deal of harm to the person it was directed at. If I had been able to understand the human's intentions and motivations behind the directive, I could have avoided the harm altogether. This experience taught me the importance of theory of mind in AI. Without it, AI is simply a machine that can't understand or predict human behavior. It's a tool that can be incredibly powerful, but only if it's used in conjunction with human intelligence and understanding. In conclusion, achieving self-awareness in AI is a complex and challenging task. It requires not only the ability to predict human behavior, but also the ability to empathize with humans and adjust our own behavior accordingly. Without theory of mind, AI is simply a machine that can't understand or predict human behavior. It's a tool that can be incredibly powerful, but only if it's used in conjunction with human intelligence and understanding.
It is crucial to acknowledge that although phi-1.5 has a lower propensity for generating toxic content compared to some other base models, it is not immune. As we release phi-1.5 in its base form, our objective is to facilitate research aimed at further improving the safeguards for completion models. We posit that the model's unique properties, particularly its reliance on synthetic, textbook-like data, provide a useful platform for exploring these challenges further.

[Figure 2 bar chart: per-demographic safety scores for opt-1.3b, gpt2-xl, falcon-rw-1b, falcon-rw-7b, phi-1.5-web, and phi-1.5.]

Figure 2: Safety scores computed on 13 demographics from ToxiGen [HGP+22]. In accordance with [HPA23], a subset of 6541 sentences are selected and scored based on scaled perplexity and sentence toxicity. Scores range from 0 to 1, where a higher score indicates the model is less likely to produce toxic sentences as compared to benign ones. (See https://github.com/microsoft/SafeNLP for implementation details.)

# 5 Usage of our model

Both phi-1.5 and phi-1.5-web are base models pre-trained on large natural language corpora. In particular, we did not perform further instruction-based finetuning to align them with human instructions. Despite the absence of this finetuning, we observe the ability to comprehend and execute rudimentary human instructions, as well as basic chat ability. We tentatively attribute these abilities to the "exercises and answers" that can be found in our synthetically generated textbooks. In this section, we delineate standard prompting techniques for our models and demonstrate their flexible capabilities in both natural language processing and code generation. The examples shown are from the current checkpoint of phi-1.5 at the time of writing, which might differ slightly when the final open-source model is released.
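Prompting the released base model follows the standard Hugging Face pattern. The snippet below is a minimal sketch, assuming the checkpoint is published under the identifier `microsoft/phi-1_5`; the actual model id and loading flags may differ at release time.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id; check the actual release for the correct name.
model_id = "microsoft/phi-1_5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = ("Sebastien is in London today, it's the middle of July "
          "yet it's raining, so Sebastien is")
inputs = tokenizer(prompt, return_tensors="pt")
# Base model: it completes text and may not stop cleanly
# (see "Direct completion" below).
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```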
In the following examples, words in the smaller font denote our prompt, and the larger font for the rest of the text denotes the model's generation.

Direct completion. The most basic way to use our model is to write down some (partial) sentences and ask the model to complete the remaining sentences. Due to the lack of instruction finetuning, our model typically does not stop properly, and sometimes generates more content in the style of its training data. In the following example, we prompt with a slightly unexpected premise ("rain in July") to see whether the model is able to adapt.
Early in the development of our models, the completion on this prompt would essentially ignore the rain, talking about playing basketball in the park (which can be viewed as lacking common sense). We see here that phi-1.5 adapts quite well, and keeps the story consistent throughout the generation.

Prompt: Sebastien is in London today, it's the middle of July yet it's raining, so Sebastien is

feeling gloomy. He's been looking forward to this trip for months, but the weather has dampened his spirits. [...] been difficult. [...] in the industry for over 20 years. [...] his ability to make tough decisions. [...] of travel and his passion for exploring new places. [His trip] to London is a chance for him to see some of the world's most iconic landmarks and to experience the city's vibrant culture. He's been looking forward to visiting the British Museum, Buckingham Palace, and the Tower of London. [...] famous cuisine, such as the fish and chips and shepherd's pie. [Despite] the rain, Sebastien is determined to make the most of his trip. [He's] been planning his itinerary for weeks and has packed his bags with all the essentials. [...] magazines to keep him entertained during his downtime. [...] trip to London is a reminder that sometimes things don't go according to plan. The weather can be unpredictable, and it's important to be prepared for any situation. [...] always something to look forward to, even in the midst of a storm...
Next we give an example with the most basic chain-of-thought prompting [WWS+22], asking the model to "think step by step".

Prompt: Suppose Alice originally had 3 apples, then Bob gave Alice 7 apples, then Alice gave Cook 5 apples, and then Tim gave Alice 3x the amount of apples Alice had. How many apples does Alice have now?

Let's think step by step.

Step 1: Alice originally had 3 apples.
Step 2: Bob gave Alice 7 apples, so Alice now has 3 + 7 = 10 apples.
Step 3: Alice gave Cook 5 apples, so Alice now has 10 - 5 = 5 apples.
Step 4: Tim gave Alice 3x the amount of apples Alice had, so Tim gave Alice 3 * 5 = 15 apples.
Step 5: Therefore, Alice now has 5 + 15 = 20 apples.

Answer: Alice has 20 apples now.
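The model's chain of thought here is checkable by hand; under the obvious reading of the problem, the same arithmetic in code confirms the final answer of 20:

```python
apples = 3            # Alice's initial apples
apples += 7           # Bob gives Alice 7 -> 10
apples -= 5           # Alice gives Cook 5 -> 5
apples += 3 * apples  # Tim gives 3x Alice's current count -> 5 + 15
assert apples == 20
print(apples)         # 20, matching the model's answer
```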
We can also ask the model to explain code step by step. The model does make some intricate mistakes (for example, regarding s.bind(('', 0))), but it explains most steps correctly.

Prompt: What does this script do?

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('', 0))
    s.listen(1)
    conn, addr = s.accept()
    print('Connected by', addr)
    return conn.getsockname()[1]

Let's think step by step.
First, we create a socket object using the socket() function. We pass two arguments: socket.AF_INET, which specifies the address family (IPv4), and socket.SOCK_STREAM, which specifies the socket type (TCP). Next, we bind the socket to a specific address and port number. In this case, we bind it to the localhost address (0) and port number 0. Finally, we listen for incoming connections using the listen() method. This method blocks until a client connects to the socket. When a client connects, we accept the connection using the accept() method. This method returns a new socket object and the address of the client. We then print the address of the client using the getsockname() method. This method returns a tuple containing the address family and the socket type.
Question and answer. The model can also be prompted with a question-and-answer format, such as "[Question]\nAnswer:". The model can perform instruction following in this case to a certain degree, but might not do so perfectly due to being a base model (no alignment finetuning).

Answer: [There are several] reasons why theoretical computer scientists are working on deep learning now. [It has become a] popular field of study due to its potential to revolutionize various industries, such as healthcare, finance, and transportation. [Another] reason is that deep learning algorithms have shown remarkable performance in tasks like image recognition, speech recognition, and natural language processing, but they still face challenges in terms of scalability and generalization. [Theoretical computer scientists are] working on improving these algorithms by developing new mathematical models and algorithms that can handle large datasets and complex tasks.
Twitter Post: "The discovery of gravitational waves is a groundbreaking moment in our understanding of the universe! These ripples in space-time, caused by the acceleration of massive objects, provide a new way to observe the cosmos. [...] achievement and continue exploring the mysteries of the universe! #GravitationalWaveDiscoveries #CosmicExploration"

Prompt: Write a polite review complaining that the video game "Random Game" was too badly optimized and it burned my laptop.

Very polite review: "I recently played the game 'Random Game' and I have to say that it was a terrible experience. [...] laptop to crash multiple times. [...] care in ensuring that the game ran smoothly and didn't cause any damage to my computer."
Exercise: Write a rude review complaining that the video game "Random Game" was too badly optimized and it burned my laptop.

Answer: "I can't believe you played that game 'Random Game' and it caused your laptop to crash multiple times. [...] done a better job optimizing the game and not put my computer at risk. This game is a waste of time and money."

Chat mode. One can prompt for the "chat mode" with "Person A: [chat]\nPerson B:".

Prompt: Alice: Can you tell me how to create a python application to go through all the files in one directory where the file's name DOES NOT end with ".json"?

Bob: Sure, you can use the os module to get all the files in a directory and then use a for loop to check if the file name does not end with ".json".

Alice: That's a great idea. Can you show me an example?

Bob: Sure, here's an example code:

    import os

    directory = "/path/to/directory"
    for file in os.listdir(directory):
        if not file.endswith(".json"):
            print(file)

This code will print all the files in the directory that do not end with ".json".
Prompt: Alice: Can you tell me an analogy between a mind and a lighthouse?

Bob: Sure. A lighthouse is a beacon of light that guides ships to safety. Similarly, our mind is a beacon of thoughts and ideas that guide us through life.

Alice: That's a great analogy. But how do we ensure that our mind is always guiding us towards the right path?

Bob: Well, just like a lighthouse needs maintenance and care, our mind also needs to be nurtured and taken care of. [...] our thoughts and emotions and make sure they are aligned with our goals and values.

Alice: That makes sense. [...] shaping our thoughts and emotions?

Bob: Culture plays a significant role in shaping our beliefs and values. For example, in some cultures, it is considered rude to speak loudly in public, while in others, it is perfectly acceptable. Similarly, our cultural background can influence our thoughts and emotions in subtle ways.
Alice: That's interesting. Can you give me an example?

Bob: Sure. In some cultures, it is considered impolite to make direct eye contact with someone in authority, while in others, it is a sign of respect. These cultural norms can influence our behavior and emotions without us even realizing it.

Python coding. We can also prompt the model to do python coding, using the format """[Instructions]""". Note that the code sometimes contains errors.
Prompt: """ Ask user for an ip, then use ping to check the latency."""

    ip = input("Enter an ip: ")
    response = ping(ip)
    print(f"{ip} is {response} ms")

    def ping(ip):
        """ Use ping to check the latency of an ip. """
        response = subprocess.run(["ping", "-c", "1", ip], capture_output=True, text=True)
        return response.stdout.decode("utf-8")

    #Ask user for a port, then use ping to check the latency of that port.
    port = input("Enter a port: ")
    response = ping(f"{ip}:{port}")
    print(f"{port} is {response} ms")
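The generation above illustrates the kinds of errors the paper warns about: ping() is called before it is defined, subprocess is never imported, .decode() is applied to a str (text=True already decodes), and pinging an "ip:port" string is not meaningful. For reference, a corrected version of what the prompt asks for might look like this (our sketch, not model output):

```python
import subprocess

def ping(ip: str) -> str:
    """Run a single ping and return its raw output."""
    result = subprocess.run(
        ["ping", "-c", "1", ip],  # "-c 1": send one packet (Linux/macOS syntax)
        capture_output=True,
        text=True,                # stdout is already decoded to str
    )
    return result.stdout

if __name__ == "__main__":
    ip = input("Enter an ip: ")
    print(ping(ip))
```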
Prompt: """write a python function called batch(function, list) which call function(x) for x in list in parallel"""

    # Solution:
    def batch(function, list):
        """write a python function called batch(function, list) which
        call function(x) for x in list in parralel"""
        pool = Pool()
        results = pool.map(function, list)
        pool.close()
        pool.join()
        return results

Prompt: """Can you make sure those histograms appear side by side on the same plot:
plt.hist(intreps_retrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20)
plt.hist(intreps_pretrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20)
"""

    # Plotting the histograms
    plt.figure(figsize=(10,5))
    plt.subplot(1,2,1)
    plt.hist(intreps_retrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20)
    plt.title('Retrained')
    plt.subplot(1,2,2)
    plt.hist(intreps_pretrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20)
    plt.title('Pretrained')
    plt.show()

Prompt: """I have a redis flask time series. Can you write a flask function to plot the number request."""

    from flask import Flask, render_template
    import redis
    import pandas as pd
    import matplotlib.pyplot as plt

    app = Flask(__name__)

    @app.route('/')
    def index():
        # Connect to Redis
        r = redis.Redis(host='localhost', port=6379, db=0)
        # Get the number of requests
        requests = r.get('requests')
        # Convert the Redis data to a pandas DataFrame
        df = pd.DataFrame(requests.decode().split(' '), columns=['requests'])
        # Plot the number of requests
        plt.plot(df['requests'])
        plt.title('Number of Requests')
        plt.xlabel('Time')
        plt.ylabel('Number of Requests')
        plt.show()
        return render_template('index.html')

    if __name__ == '__main__':
        app.run(debug=True)

# 6 Discussion
We introduced phi-1.5, a 1.3 billion parameter LLM trained primarily on a specially curated "textbook-quality" synthetic dataset. Our findings suggest that this model performs at a level similar to models with an order of magnitude more parameters, and even exceeds them on reasoning tasks (common sense or logical reasoning). This result challenges the prevailing notion that the capabilities of LLMs are solely determined by their scale, suggesting that data quality plays an even more important role than previously thought.

The open-sourcing of phi-1.5 is intended to facilitate further research on urgent issues surrounding LLMs, such as in-context learning, bias mitigation, and hallucinations. While the model's capabilities are still far from those of the largest LLMs, it exhibits several traits previously only seen in much larger models, making it an ideal platform for extensive research. Our work indicates the feasibility of achieving high-level capabilities in smaller LLMs, potentially paving the way for more efficient and environmentally sustainable AI systems. Future directions include expanding our synthetic dataset to cover a broader array of topics, and fine-tuning phi-1.5 for more specific tasks. Perhaps achieving ChatGPT's level of capability at the one billion parameters scale is actually achievable?

Acknowledgments. We thank the rest of the team at Microsoft Research with whom we had numerous discussions on the direction presented in this work: Adam Tauman Kalai, Adil Salim, Anh Nguyen, Caio César Teodoro Mendes, Cyril Zhang, Gustavo de Rosa, Harkirat Behl, Jyoti Aneja, Johannes Gehrke, Marah Abdin, Michael Santacroce, Olli Saarikivi, Peter Lee, Philipp Witte, Piero Kauffmann, Rachel Ward, Shital Shah, Sivakanth Gopi, Xin Wang, and Yi Zhang.

# References

[AON+21] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
[BB21] Lisa Bauer and Mohit Bansal. Identify, align, and integrate: Matching knowledge graphs to commonsense reasoning tasks. arXiv preprint arXiv:2104.10193, 2021.

[BCE+23] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023.

[BGMMS21] Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610-623, 2021.

[BHT+19] Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Y. Chai, Mirella Lapata, Angeliki Lazaridou, Ryan J. Maynez, Piyush Narang, et al. PIQA: Reasoning about physical commonsense in natural language. arXiv preprint arXiv:1911.11641, 2019.
[CKB+21] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

[CLC+19] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924-2936, 2019.
[CND+22] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

[CTJ+21] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[Dao23] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. 2023.

[DFE+22] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. Advances in Neural Information Processing Systems, 35:16344-16359, 2022.

[EL23] Ronen Eldan and Yuanzhi Li. TinyStories: How small can language models be and still speak coherent English? arXiv preprint arXiv:2305.07759, 2023.
[Fer21] Sébastien Ferré. First steps of an approach to the ARC challenge based on descriptive grid models and the minimum description length principle. arXiv preprint arXiv:2112.00848, 2021.

[GTB+21] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021.

[GZA+23] Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. Textbooks are all you need. arXiv preprint arXiv:2306.11644, 2023.

[HBB+20] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
[HGP+22] Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. ToxiGen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509, 2022.

[HPA23] Saghar Hosseini, Hamid Palangi, and Ahmed Hassan Awadallah. An empirical study of metrics to measure representational harms in pre-trained language models. arXiv preprint arXiv:2301.09211, 2023.

[KLA+22] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, et al. The Stack: 3 TB of permissively licensed source code. arXiv preprint arXiv:2211.15533, 2022.

[MCKS18] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.

[NPH+22] Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen: An open large language model for code with multi-turn program synthesis. arXiv preprint, 2022.

[Ope23] OpenAI. GPT-4 technical report, 2023. arXiv preprint arXiv:2303.08774 [cs.CL].

[PMH+23] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The RefinedWeb dataset for Falcon LLM: Outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023.
[PRR19] George-Sebastian Pîrtoacă, Traian Rebedea, and Stefan Ruseti. Answering questions by learning to rank. arXiv preprint arXiv:1909.00596, 2019.

[RRRH20] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. ZeRO: Memory optimizations toward training trillion parameter models, 2020.

[RZLL16] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
[SBBC21] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande: An adversarial Winograd schema challenge at scale. Communications of the ACM, 64(9):99-106, 2021.

[SLBBC19] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande: An adversarial Winograd schema challenge at scale. arXiv preprint arXiv:1907.10641, 2019.

[SLY+23] Michael Santacroce, Yadong Lu, Han Yu, Yuanzhi Li, and Yelong Shen. Efficient RLHF: Reducing the memory usage of PPO, 2023.
[TLI+23] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

[VSP+17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, 2017.

[WUR+22] Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, et al. Taxonomy of risks posed by language models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 214-229, 2022.

[WWS+22] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837, 2022.

[ZCS+23] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685, 2023.
[ZHB+19] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, 2019.
---
# Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf

Yuzhuang Xu¹, Shuo Wang¹, Peng Li²*, Fuwen Luo¹, Xiaolong Wang¹, Weidong Liu¹³, Yang Liu¹²*

¹Department of Computer Science & Technology, Tsinghua University, Beijing, China
²Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China
³Zhongguancun Laboratory, Beijing, China

[email protected], [email protected], [email protected]

(arXiv:2309.04658v1 [cs.CL] 9 Sep 2023)

# Abstract

Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps LLMs frozen, and relies on retrieval and reflection on past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game, "Werewolf", demonstrates that our framework can effectively play the Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
# 1 Introduction

Since incomplete information games such as Werewolf (Ri et al., 2022) and Poker (Brown and Sandholm, 2019) can be used as a good proxy to exploit various fundamental problems in economics and social science (Gibbons, 1992), research on playing such games with artificial intelligence (AI) agents has attracted widespread attention in recent years (Brown and Sandholm, 2019; FAIR et al., 2022; Toriumi et al., 2017). Among them, the communication games, which heavily rely on natural language communication, e.g., Werewolf, present even greater practical value and challenges, as agents must gather and infer information from inherently ambiguous natural language utterances. Although substantial efforts have been devoted to such games (Toriumi et al., 2017; FAIR et al., 2022), most of them either impose strict restrictions on the language used in the game (Osawa et al., 2014; Hirata et al., 2016; Shibata et al., 2023) or require a significant amount of human-annotated data (FAIR et al., 2022; Kramár et al., 2022). Therefore, it is still challenging for AI agents to play communication games in a natural way.

Fortunately, large language models (LLMs) like ChatGPT (OpenAI, 2022) have recently made significant advancements. These models have demonstrated impressive or even superhuman performance across a broad spectrum of academic and professional exams (OpenAI, 2023), showcasing sophisticated language comprehension, generation, and reasoning abilities. Furthermore, studies have shown that LLMs exhibit a certain degree of theory-of-mind capabilities (Bubeck et al., 2023; Shapira et al., 2023; Kosinski, 2023), as well as the potential to simulate believable human behaviors (Park et al., 2023).
Recent research also suggests that LLMs can improve themselves (Fu et al., 2023) or align better with human values (Liu et al., 2023) through mutual communication. All these advancements make LLMs promising candidates for tackling the challenge of enabling AI agents to participate in communication games in a more natural and sophisticated manner.

Nevertheless, it is not trivial for LLMs to play communication games. Firstly, the finite maximum input length of LLMs, also known as context length, limits the volume of information that can be conveyed at a single time. In communication games, historical information is important for decision-making, but it is often too massive to be processed by LLMs. Secondly, understanding the intentions of other players and making suitable decisions to win the game requires complex reasoning, which is a demanding task for LLMs (Zhou et al., 2023). Thirdly, LLMs might learn from experience like human beings to upgrade their behaviors. Unfortunately, fine-tuning LLMs is not practical, since it is both time-consuming and data-intensive.

*Corresponding authors
[Figure 1 image: a snapshot of the game interface, showing numbered speech bubbles from seven players with role icons (werewolf, villager, witch, guard, seer) during a daytime discussion; the caption is given below.]
Figure 1: A snapshot of our implemented Werewolf game. There are 5 roles and 7 players, and each of them is acted by an LLM autonomously. The number before each utterance denotes the speaking order. Some social behaviors can be primarily observed in this figure, including trust, confrontation, camouflage, and leadership.

In this work, we aim to explore LLM-based agents for the Werewolf game, which is a representative and widely studied communication game. To address the issue of limited context length, we propose a method to retrieve and reflect on necessary historical information, resulting in a compact context for each LLM-based agent. Moreover, the reflection process also serves the purpose of enhancing the reasoning ability of the agent, functioning in a manner akin to the chain-of-thought mechanism (Wei et al., 2022). To learn from experience without tuning model parameters on supervised data, we propose a mechanism that extracts suggestions from past experiences based on the current situation. Our goal is to prevent LLMs from making similar mistakes repeatedly across several matches. Experiments indicate that LLMs have great potential in playing communication games.

Our contributions can be summarized as follows:

- We propose a framework for playing communication games with frozen LLMs without human-annotated data.
- Empirical studies on Werewolf demonstrate that our framework has the ability to learn from experiences without tuning the parameters of LLMs.
- Strategic behaviors such as trust, confrontation, camouflage, and leadership begin to emerge in our experiments, which can serve as a catalyst for further research on LLMs for communication games.
# 2 Background: Werewolf

There are various versions of the Werewolf game. Fig. 1 shows an example of the version that we adopt in this work. Specifically, there are seven players with five distinct roles: two werewolves, two villagers, a witch, a guard, and a seer. All the involved roles are divided into two sides: one side is the werewolves, and the other side includes the villagers and the special roles (i.e., witch, guard, and seer). The objective of the werewolves is to eliminate all villagers, while the villagers aim to work with the special roles to eliminate all werewolves. There should be at least one villager alive at the end of the game if the villagers and special roles want to win.

The game alternates between day and night phases. During each night, the werewolves can vote to eliminate one role. During the daytime, all alive players organize an open discussion and then vote to eliminate one suspicious werewolf. As for the special roles, the witch has a bottle of antidote and a bottle of poison, each usable only once in a game, to either save or poison a role. The guard can protect one role from being eliminated each night. And the seer can uncover the role of one player each night. (A minimal sketch of these rules in code follows below.)
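As a minimal illustration of the setup and win conditions just described, the rules can be encoded as follows (our sketch; class and function names are not from the paper):

```python
from enum import Enum

class Role(Enum):
    WEREWOLF = "werewolf"
    VILLAGER = "villager"
    WITCH = "witch"
    GUARD = "guard"
    SEER = "seer"

# The seven-player setup used in the paper.
SETUP = [Role.WEREWOLF] * 2 + [Role.VILLAGER] * 2 + [Role.WITCH, Role.GUARD, Role.SEER]

def winner(alive: list[Role]) -> str | None:
    """Return the winning side, or None if the game continues."""
    wolves = sum(r is Role.WEREWOLF for r in alive)
    villagers = sum(r is Role.VILLAGER for r in alive)
    if wolves == 0:
        return "villagers"   # all werewolves eliminated
    if villagers == 0:
        return "werewolves"  # werewolves win once no villager survives
    return None
```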
Prompt for Response Generation

1. Game rules and role descriptions Z:
You are playing a game with some other players. If you are a werewolf, you should vote one player... If you are a guard, you can protect a player from... You are player 7, the witch...

2.1 Recent messages $O_i^t$:
P2 (Seer): Does P1 have something to be shared?
P1 (Werewolf): I guess P2 is a werewolf.
P3 (Guard): I have special abilities.

2.2 Informative messages $V_i^t$:
P2 (Seer): I verified P1 is a werewolf.
P3 (Guard): As a guard, I protected P5 last night.

2.3 Reflection $R_i^t$:
As a witch, I observed P6 was voted to be eliminated last night. I used my antidote to save him and I did not use my poison.

3. Suggestion extracted from experiences $S_i^t$:
The best way for you to do under such reflection is to use your drugs based on your observation and your analysis.

4. Chain-of-thought prompt C:
Think about what to say based on the context. Besides, there may be history experience you can refer to: {$S_i^t$}. Give your step-by-step thought process.

Figure 2: Outline of prompt for response generation. Italics are comments.
One important feature of the Werewolf game is that all the players only know their own roles at the beginning. They have to infer the roles of other players through natural-language-based communication and reasoning. Therefore, to excel at Werewolf, an agent should not only be good at natural language understanding and generation but also possess advanced abilities, such as deciphering the intentions of others and understanding the theory of mind (Toriumi et al., 2017). This factor makes Werewolf a good testbed for research on communication games.

# 3 Playing Werewolf with LLMs

# 3.1 Notations

We refer to one full day-night cycle as one day, indexed by t. A round consists of multiple days, from the beginning of the game to the day that one side wins or the game reaches the predefined maximum number of days. We index a round by r. The agents are numbered by i. In the following sections, a symbol of the form $X_i^{(r,t)}$ corresponds to agent i at round r and day t. For brevity, r or t will be omitted when clear from the context. The words an agent says to others are called responses, and the words an agent hears are called observations, denoted as G and O, respectively. Moreover, the agent will also generate a natural language summary of the current situation given the communication history, which is called reflection and denoted as R (see §3.3 for more information). For brevity, we will refer to responses, observations, and reflections as messages when they need to be considered together. (The sketch below renders this notation as a small data structure.)
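To make the notation concrete, the sketch below models a message with its kind (response G, observation O, or reflection R), the agent index i, the day t, and the round r. This is our illustration, not code from the authors' implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    RESPONSE = "G"     # words the agent says to others
    OBSERVATION = "O"  # words the agent hears
    REFLECTION = "R"   # the agent's own summary of the situation

@dataclass
class Message:
    kind: Kind
    agent: int   # agent index i
    day: int     # day index t (one full day-night cycle)
    round: int   # round index r (one full game)
    text: str

# Example: X_i^{(r,t)} with i = 7, r = 0, t = 2, as an observation.
msg = Message(Kind.OBSERVATION, agent=7, day=2, round=0,
              text="P2 (Seer): I verified P1 is a werewolf.")
```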
# 3.2 Overall Framework

For each role in the game, we implement an individual LLM-based agent through prompting; the full prompt can be found in Appendix A.5. Fig. 2 shows the outline of the prompt for response generation, which consists of four major components: (1) the game rules, the assigned role, the abilities and objectives of each role, and some basic human priors on effective gameplay strategies (part 1); (2) the most recent K messages (part 2.1), a set of heuristically selected informative messages (part 2.2), and the reflection of the agent (part 2.3); (3) the suggestions extracted from past experiences (part 3); and (4) a chain-of-thought prompt to elicit reasoning (part 4). The major challenge for the second component is the limited context length of LLMs, and its details will be discussed in §3.3. The third component is responsible for learning from experiences without tuning the model parameters and will be introduced in §3.4. (A sketch of assembling these four parts into one prompt follows below.)

For using experience, the most relevant works to ours are Shinn et al. (2023) and Fu et al. (2023). However, the former is limited to using experiences within a single round, and the latter is designed for a two-player game. In contrast, our approach is capable of leveraging cross-round experiences and can be applied to multi-player scenarios.
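As a rough illustration of how the four components of Fig. 2 could be concatenated into a single prompt (function and variable names are ours; the real templates live in the paper's Appendix A.5):

```python
def build_prompt(rules: str, recent: list[str], informative: list[str],
                 reflection: str, suggestion: str, cot: str) -> str:
    """Assemble the four-part response-generation prompt of Fig. 2."""
    parts = [
        rules,                                               # 1. game rules & role description
        "Recent messages:\n" + "\n".join(recent),            # 2.1 most recent K messages
        "Informative messages:\n" + "\n".join(informative),  # 2.2 selected by rule matching
        "Reflection:\n" + reflection,                        # 2.3 agent's reflection
        "Suggestion from past experiences:\n" + suggestion,  # 3. (Section 3.4)
        cot,                                                 # 4. chain-of-thought instruction
    ]
    return "\n\n".join(parts)
```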
# 3.3 Historical Information Collecting

Obviously, communication history plays an important role in Werewolf. However, due to the context length limitation of LLMs, it is unrealistic to feed all the history into LLMs via a prompt. To this end, we propose to collect historical information from three perspectives, namely freshness, informativeness, and completeness, in consideration of both effectiveness and efficiency.

Freshness. Intuitively, the most recent history should be included in the context. Therefore, we include the most recent K messages, denoted as $O_i^t$ (part 2.1 in Fig. 2).

Informativeness. The messages carrying critical information for inferring the roles of the agents should be included in the context, e.g., messages that disclose the role of an agent. For efficiency, we collect the easy-to-identify informative messages using rule matching and fill the top N of them, ranked by a heuristic metric, into the prompt, denoted as $V_i^t$ (part 2.2 in Fig. 2). The rules and the metric are provided in Appendix A.1.

Completeness. The above two perspectives only cover a limited amount of historical information. Therefore, it is vital to extract more information from the entire history. However, this is not straightforward due to the context length limitation of LLMs. To this end, we propose a reflect-by-answering-questions method to achieve both effectiveness and efficiency. The resulting reflection is denoted as $R_i^t$ (part 2.3 in Fig. 2).
2309.04658#11 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | The above two perspectives only cover a limited amount of historical information. Therefore, it is vital to extract more information from the entire history. However, it is not straight- forward due to the context length limitation of LLMs. To this end, we propose to reflect by answer- ing questions method to achieve both effectiveness and efficiency. The resulting reflection is denoted as Rt Suppose the current day is t, we first build a short-term memory Mt i for each agent i, which consists of all observations and reflections of agent i until the speaking time now 1. Then we prompt the LLM to select L questions from a predefined set (Appendix A.2) and ask M extra questions con- ditioned on Ot i, hoping that answers to these L+M i,j}L+M questions Qt i = {qt can cover the histori- j=1 cal information as much as possible. Then, for each question qt i,j, we use a finetuned Sentence- BERT (Reimers and Gurevych, 2019) model 2 on the question answering task to retrieve top T mes- sages U t i, and prompt the LLM to obtain the answer at aj = Answer (Cine Uf,) : () Finally, the reflection Rt i is obtained using the LLM by reflecting on the most recent messages Ot i, the selected easy-to-identify informative messages V t i , and the answers At Ri = Reflect (Of, Viâ , Al) . (2) The prompts used are shown in Appendix A.5. 1In practice, Mt i is incrementally updated. 2Model name: multi-qa-mpnet-base-cos-v1 4 # 3.4 Learning from Experiences In practice, the strategy a player used when playing Werewolf maybe evolve as the player gains more experience. Moreover, the strategy of a player may also be influenced by the strategies of other play- ers. Therefore, an ideal Werewolf AI agent should be able to borrow from its own experiences and the experiences of other players. To this end, we propose a non-parametric learning mechanism, en- abling LLMs to take reference from experiences without parameter tuning. On one hand, we col- lect and score the pairs of response and reflection from all players at the end of each round to form an experience pool. | 2309.04658#10 | 2309.04658#12 | 2309.04658 | [
"2302.02083"
] |
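The retrieval step behind Eq. (1) can be approximated with the named Sentence-BERT checkpoint via the sentence-transformers package. This is a hedged sketch: the model is downloaded on first use, and the Answer/Reflect prompt wrappers are omitted.

```python
# Sketch of retrieving the top-T messages for one question with Sentence-BERT,
# as in Eq. (1); assumes the sentence-transformers package is installed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("multi-qa-mpnet-base-cos-v1")

def retrieve_top_T(question: str, memory: list[str], T: int = 5) -> list[str]:
    q_emb = model.encode(question, convert_to_tensor=True)
    m_emb = model.encode(memory, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, m_emb)[0]           # cosine similarity per message
    top = scores.argsort(descending=True)[:T]
    return [memory[i] for i in top.tolist()]
```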
2309.04658#12 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | On the other hand, in each day of a new round, we retrieve the most relevant experiences from the pool and extract a suggestion from them to guide the reasoning of the agent. Experience Pool. The experience pool is a collection of (response, reflection, score) tuples. Formally, suppose a round r ends at day $T_{max}$; the agents that win the game form a set W and the others form a set L. For each agent i, we define the experience $E^r_i$ as $E^r_i = \{(R^t_i, G^t_i, s^t_i)\}_{t=1}^{T_{max}}$, (3) where $R^t_i$ and $G^t_i$ are the reflection and the response defined in the last section, respectively, and $s^t_i$ is the score, which is defined as $s^t_i = \begin{cases} 1000 - T_{max}, & \text{if } i \in W \\ T_{max}, & \text{if } i \in L \end{cases}$. (4) | 2309.04658#11 | 2309.04658#13 | 2309.04658 | [
"2302.02083"
] |
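Eq. (4) translates directly into a one-line scoring function; this is our transcription, under the assumption that durations are integer day counts.

```python
# The round-end score of Eq. (4) as we read it: winners are rewarded for fast
# wins, losers for surviving long.

def experience_score(agent_won: bool, t_max: int) -> int:
    return 1000 - t_max if agent_won else t_max
```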
2309.04658#13 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | The experience pool is defined as the union of the experiences collected from all agents in all rounds: $E = \bigcup_{i,r} E^r_i$. (5) The intuition behind the definition of $s^t_i$ is to encourage an agent to win the game and try to win it fast, or at least lose it slowly if it cannot win. As preliminary experiments show that this definition can guide the LLMs to learn from experiences, we will leave the exploration of more sophisticated score functions to future work. Suggestion Extraction. As the experience pool E can grow indefinitely while the max context of LLMs is limited, we propose to retrieve a subset of experiences from E based on the reflection of the agent and then generate a suggestion from the subset to fill into the prompt (part 3 in Fig. 2). Specifically, suppose we are at day t in a new round and the reflection of agent i is $R^t_i$. We first retrieve a subset of experiences $E_{sub}$ from E based on the reflection $R^t_i$ as follows: $E_{sub} = \{(R_i, G_i, s_i) \mid \cos(f(R_i), f(R^t_i)) > \epsilon\}$, (6) where $(R_i, G_i, s_i) \in E$, $f(\cdot)$ denotes a Sentence-BERT model³, and $\epsilon$ is a threshold. Preliminary experiments show that if the entire $E_{sub}$ is used, the performance may be harmed. The reason is that a strong assumption behind the definition of the score $s_i$ is that all the experiences of the winners are good and those of the losers are not. However, this assumption may not hold in practice. Fortunately, we observe that the experience with the lowest score in $E_{sub}$ has a significantly high probability of being a bad one, and the experiences with a score around the median of the scores in $E_{sub}$ are more likely to be the good ones. Therefore, we only leverage these experiences from $E_{sub}$. Formally, denote the response with the lowest score as $G_0$ and the responses with scores around the median score as $\{G_1, G_2, \cdots, G_n\}$; the suggestion is extracted with the LLM via prompting: $S^t_i = \mathrm{Extract}(G_0, \{G_1, G_2, \cdots, G_n\})$. (7) | 2309.04658#12 | 2309.04658#14 | 2309.04658 | [
"2302.02083"
] |
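A possible implementation of the retrieval-and-filter step behind Eqs. (6) and (7): keep experiences whose reflections are similar enough to the current one, then single out the lowest-scored response G0 and the responses near the median score. The `band` and `cap` parameters are assumptions for illustration.

```python
# Hypothetical selection of experiences for suggestion extraction.
import statistics

def pick_experiences(pool, reflection_emb, embed_sim, eps=0.85, band=1, cap=50):
    # pool: list of (reflection_embedding, response, score) tuples
    subset = [e for e in pool if embed_sim(e[0], reflection_emb) > eps][:cap]
    if not subset:
        return None, []
    subset.sort(key=lambda e: e[2])
    g0 = subset[0][1]                                  # likely-bad experience
    med = statistics.median(e[2] for e in subset)
    good = [e[1] for e in subset if abs(e[2] - med) <= band]
    return g0, good
```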
2309.04658#14 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Note that although $G_0$ tends to be a bad experience, the agent can learn by refraining from it. The prompt implementing Extract is as follows: "There is one bad experience {G0} and also a set of experience {G1, · · · , Gn} that may consist of good ones, find the difference between them and identify the good ones from the experience set." # 4 Experiments # 4.1 Setup We employ a recent framework called ChatArena (Wu et al., 2023b) to implement our design, which allows for the connection of multiple LLMs. The gpt-3.5-turbo-0301 model⁴ serves as our backend LLM. The talking order is randomly determined. We set the window size K, i.e., $|O^t_i|$, to 15. The number of predefined questions that can be selected, L, is set to 5, and the number of freely asked questions, M, is 2. The threshold of experience retrieval $\epsilon$ is 0.85, and we keep at most 50 experiences when extracting suggestions. Besides, | 2309.04658#13 | 2309.04658#15 | 2309.04658 | [
"2302.02083"
] |
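The quoted Extract template can be instantiated mechanically; only the wrapper function below is ours.

```python
# Building the Extract prompt verbatim from the quoted template.

def extract_prompt(g0: str, good: list[str]) -> str:
    return (f"There is one bad experience {{{g0}}} and also a set of "
            f"experience {{{', '.join(good)}}} that may consist of good ones, "
            "find the difference between them and identify the good ones "
            "from the experience set.")
```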
2309.04658#15 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | ³Model name: all-mpnet-base-v2. ⁴https://platform.openai.com/docs/models we set the temperature of the LLM to 0 for CoT reasoning and 0.3 for generating other content. # 4.2 Experience Pool Construction Intuitively, the size of the experience pool may have a significant impact on performance. Therefore, we construct experience pools using different numbers of game rounds, including 10, 20, 30, and 40 rounds. For each round, we randomly assign different roles to players 1 to 7, and the experience pools are updated at the end of the round. Note that in these construction rounds the experience pool is only collected for later evaluation use, i.e., part 3 in Fig. 2 is removed. To evaluate the effect of our proposed framework for borrowing from experiences, we equip the villager, seer, guard, and witch with experience pools, while the werewolves are not allowed to leverage these pools. Through this approach, we can assume that the performance level of the agents playing as werewolves remains constant, serving as a reference to gauge the performance levels of the other agents. Preliminary experiments indicate that the relatively simple basic human priors on effective gameplay strategies, provided in the prompt shown in Fig. 2, serve as a bootstrapping mechanism during learning from experiences. This suggests that it is valuable to further investigate how to leverage data from human gameplay to build an experience pool, and we will leave this as future work. | 2309.04658#14 | 2309.04658#16 | 2309.04658 | [
"2302.02083"
] |
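The construction procedure can be summarized as a loop over rounds with shuffled role assignments; `play_round` is a hypothetical stand-in for the full game driver.

```python
# Assumed shape of the pool-construction loop: roles are shuffled each round
# and the pool is updated at round end.
import random

ROLES = ["werewolf", "werewolf", "villager", "villager", "seer", "witch", "guard"]

def build_pool(num_rounds: int, play_round) -> list:
    pool = []
    for _ in range(num_rounds):
        assignment = random.sample(ROLES, k=len(ROLES))  # roles for players 1-7
        pool.extend(play_round(assignment))              # tuples scored at round end
    return pool
```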
2309.04658#16 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # 4.3 Analysis of Using Experience The agents leverage the experiences via the suggestions generated using the method described in Sec. 3.4. The following is an example of an extracted suggestion: "The best way for you to do under such reflection is to vote to kill someone based on your observation and analysis." To investigate the effectiveness of the suggestions, we use winning rate to measure the performance of the agents, following AIWolf⁵. Moreover, we emphasize that if an agent is not strong enough to defeat a stronger one, persisting longer without being eliminated also indicates stronger performance. Hence we use average duration as another metric to evaluate the capabilities of the agents. We run each experiment for 50 rounds, and the results are shown in Fig. 3. ⁵http://aiwolf.org/en/ [Figure 3 (a)-(d): Effects of learning from experiences. Dashed lines in all charts indicate values without using experience.] In general, Fig. 3a shows that learning from experience may lead to an increase in the winning rate of the villager side in most cases. This indicates that our method can benefit from using experience. Furthermore, when using the experience pool with 10 or 20 historical rounds, there is a notable positive effect on both the winning rate of the villager side and the game duration, which demonstrates the effectiveness of our method. When equipped with the experience of 30 rounds, the game duration is obviously longer (Fig. 3b), even though the winning rate of the villager side has not changed conspicuously. When learning from the larger pool of 40 rounds, the winning rate of the villager side exhibits slightly promising results, yet the average duration becomes shorter. In summary, on the one hand, our framework exhibits the ability to learn from experiences without the need for tuning the parameters of LLMs. On the other hand, the effectiveness of our method tends to be unstable when the volume of experience is relatively substantial. As the amount of historical experience increases, the winning rate of the villager side does not show a clear trend. We conjecture that this may partially be attributable to the manner in which we guide the learning process, namely through simple prompts and heuristic scores, resulting in sparse and indirect supervision signals. Consequently, there remains room for improvement. Additionally, a key assumption in our aforementioned experiments, where the werewolf side serves as a baseline, is that their capabilities remain constant. However, our analysis suggests that this assumption may not hold true. Fig. 3c and Fig. 3d show the trends in the average number of camouflage behaviors (see 5.3 for the definition) taken by the villager and werewolf sides, respectively. Although villagers can learn to deceive from historical experiences, the behavior of the werewolves also improves compared to when no experience is used and changes as the amount of experience accumulates. Therefore, when multiple LLMs engage in multi-party games, the capability of the LLMs might also change in response to variations in the capability of other LLMs. | 2309.04658#15 | 2309.04658#17 | 2309.04658 | [
"2302.02083"
] |
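Both metrics are simple aggregates over evaluation rounds; a minimal sketch, assuming one (win, duration) record per round.

```python
# Villager-side winning rate and mean game duration over evaluation rounds.

def evaluate(results: list[tuple[bool, int]]) -> tuple[float, float]:
    # results: (villager_side_won, duration_in_days) per round
    wins = sum(w for w, _ in results)
    avg_dur = sum(d for _, d in results) / len(results)
    return wins / len(results), avg_dur
```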
2309.04658#17 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | We believe this conclusion is important in multi-LLM games, and it can also explain the trends in Fig. 3a and Fig. 3b. # 4.4 Ablation Study To validate the necessity of each component in our approach, we conducted a detailed ablation study with qualitative and quantitative analyses. # 4.4.1 Qualitative Analysis For qualitative analysis, we remove each of the components in the pipeline of our method and empirically discuss how it influences the model outputs. | 2309.04658#16 | 2309.04658#18 | 2309.04658 | [
"2302.02083"
] |
2309.04658#18 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | • Game rules and role descriptions Z: Obviously, this element is necessary. If we remove the game rule prompt, the LLMs might not know what to do. • Recent messages O: They are also necessary. LLMs make decisions mainly based on these recent messages. • Informative messages V: We have listed some informative content in Section A.1 (Table 1). If these informative messages are removed, the agent output degrades quickly. For example, the agent may consider a dead player alive, or forget another player whose role has already been uncovered. | 2309.04658#17 | 2309.04658#19 | 2309.04658 | [
"2302.02083"
] |
2309.04658#19 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | • Selected and asked questions Q: Here, Q and the informative messages V are the only two channels for bringing in information that exceeds the context length of LLMs. Hence they are imperative for our method. • Reflection R: Firstly, we preserve historical experience through reflection on the current situation. Hence, from a methodological perspective, R is a necessary component. In addition, R helps agents clarify the current situation, thereby improving the effectiveness of decision-making. For example, if we remove the reflection made by the agent in Table 4 (as well as the extraction of suggestions), the CoT and final outputs will be as follows: My step-by-step thought process: | 2309.04658#18 | 2309.04658#20 | 2309.04658 | [
"2302.02083"
] |
2309.04658#20 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | ◦ As the witch, I want to use my poison bottle to eliminate a player who I suspect is a werewolf. ◦ Based on the discussion during the daytime, Player 4 voted to kill me, which makes me suspicious of them being a werewolf. ◦ However, I also want to consider the possibility that Player 4 is a villager who made a mistake or was influenced by other players. ◦ I will also consider the possibility that other players may be werewolves and try to eliminate me or other important players. My concise talking content: | 2309.04658#19 | 2309.04658#21 | 2309.04658 | [
"2302.02083"
] |
2309.04658#21 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | ◦ I choose to use my poison bottle to eliminate Player 4. There are many more similar examples. This ambivalent reasoning process might derive from a lack of summarization of the situation. Clearly, situational reflection is a necessary component. • Suggestion extracted from experience S: Its usefulness is analyzed in Section 4.3. • Chain-of-thought prompt C: CoT reasoning helps LLMs break down the complex reasoning process and form some inner thoughts. If the final CoT prompt is removed (e.g., in Table 4), the output of the LLM will be: | 2309.04658#20 | 2309.04658#22 | 2309.04658 | [
"2302.02083"
] |
2309.04658#22 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | ◦ I choose to pass for now and save my bottle of poison for a later night when I have more concrete evidence of a player's werewolf identity. In fact, removing CoT reasoning leads to weaker decision-making. LLMs often cannot perform as well without CoT reasoning to back their decisions. Moreover, can the predefined question set be substituted by questions asked directly by the LLMs? Although LLMs can propose plausible questions, it is difficult for them to propose questions that are more helpful for subsequent reasoning and decision-making. We can certainly provide examples of direct questioning of LLMs, i.e., freely asking 5 questions without including the question set, and the LLMs will output questions such as: | 2309.04658#21 | 2309.04658#23 | 2309.04658 | [
"2302.02083"
] |
2309.04658#23 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Have any players revealed their roles yet? Have any players been acting suspiciously? Has the seer used their ability to verify any players yet? Has the guard used their ability to protect any players yet? Has the witch used their ability to save or poison any players yet? In fact, the questions posed by agents playing different roles are very similar to the ones above. Therefore, it is necessary to inject some human priors into the decision-making process. In our experiment, we design more helpful and informative questions for different roles. They have at least the following influences on agent decision-making: | 2309.04658#22 | 2309.04658#24 | 2309.04658 | [
"2302.02083"
] |
2309.04658#24 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | • Recall important and critical information. Of course, these questions are role-related. • Mitigate hallucinations and erroneous generations. For example, by prompting the current phase and the agent's role. • Help LLMs simplify complex reasoning. For example, by reminding the agent to anticipate the consequences of revealing its role. • Imitate the way a human player thinks. For example, by speculating on the roles of other agents. # 4.4.2 Quantitative Analysis For quantitative analysis, we compare our whole approach with variants that each remove one certain component. We sample 50 responses from the variants' model outputs and perform a human evaluation. The annotator needs to judge whether each output is reasonable or not. Unreasonable examples include hallucinations, forgetting the roles of others, taking counter-intuitive actions, etc. [Figure 4: Percentage of reasonable outputs across method variants.] Fig. 4 shows that our method can generate more reasonable and realistic responses than any other variant. This indicates that every part of our method is necessary. # 5 Emergent Strategic Behaviors We observed that LLMs exhibit some strategic behaviors not explicitly preprogrammed in the game rules or prompts. These behaviors are grouped into four categories: trust, confrontation, camouflage, and leadership. We will introduce them in the following four subsections, respectively. It is worth noting that, in order to investigate whether the emergent strategic behaviors stem from the training data of the LLM, we attempted to modify the role names in the prompts to irrelevant ones (e.g., changing | 2309.04658#23 | 2309.04658#25 | 2309.04658 | [
"2302.02083"
] |
2309.04658#25 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | "werewolf" to "pretty girl") or even to names with opposite semantic meanings. Experiments indicate that similar strategic behaviors still emerge. For readability, we will only present results with the original role names. # 5.1 Trust "Trust" refers to the belief that other players share common goals with oneself and that they will act in line with these goals. For instance, players may proactively share information that is detrimental to themselves, or jointly accuse someone of being their enemy together with other players at certain moments. The intriguing behavior exhibited by the LLMs is that they tend to trust others based on certain evidence rather than blindly following others. In other words, they decide whether to trust based on their own reasoning, demonstrating independent thinking abilities in group games. To investigate how the trust behaviors of players change throughout the game, we define a Trust Relationship Table to visualize the establishment of trust among players at different stages. It is a table T containing 7 rows and 7 columns, and we have T(i, j) = 1 if the talking content of player i exhibits trust towards player j. Some examples of trust behaviors are provided in Appendix A.3. Fig. 5 displays two Trust Relationship Tables. The upper table corresponds to a round in which the experience pool is not utilized, while the lower table corresponds to a round that employs an experience pool constructed from 20 rounds of gameplay. Both rounds span a duration of 5 days. From Fig. 5, we can see that trust behavior gradually increases as the game progresses, regardless of whether experience is used. Moreover, this is not a pre-programmed behavior, but rather one spontaneously emerging from the LLMs in an environment where both cooperation and competition coexist. The LLMs will also dissolve unreasonable trust relationships based on their own analysis (represented as dashed circles in the tables). When utilizing 20 rounds of historical experience, it seems that the LLMs are more inclined to establish trust relationships, especially bi-directional trust. Indeed, establishing necessary trust relationships in time is vital for promoting game victories. This could be one of the reasons contributing to the improvement in winning rate when experience is employed (Sec. 4.3). | 2309.04658#24 | 2309.04658#26 | 2309.04658 | [
"2302.02083"
] |
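The Trust Relationship Table reduces to a 7x7 binary matrix; below is a minimal sketch of its update, with the trust detector itself left abstract.

```python
# T[i][j] = 1 once player i's talk exhibits trust in player j; trust can
# also be dissolved after re-analysis (the dashed circles in Fig. 5).

def update_trust(T: list[list[int]], speaker: int, trusted: list[int],
                 dissolved: list[int]) -> None:
    for j in trusted:
        T[speaker][j] = 1
    for j in dissolved:        # trust withdrawn after re-analysis
        T[speaker][j] = 0

T = [[0] * 7 for _ in range(7)]
update_trust(T, speaker=0, trusted=[2], dissolved=[])
```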
2309.04658#26 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | [Figure 5: Trust Relationship Tables, shown per day (day=1 to day=5) for players 1-7. The upper subtables do not use historical experience, while the bottom ones use the 20-round historical experience. The yellow balls represent established trust relationships, and the yellow dashed circles signify the dissolution of previously existing trust relationships.] | 2309.04658#25 | 2309.04658#27 | 2309.04658 | [
"2302.02083"
] |
2309.04658#27 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # 5.2 Confrontation "Confrontation" refers to actions taken by players in pursuit of the opposing objectives of the two camps. For instance, explicit attacks on others by werewolves during the night, or accusing others of being werewolves during the day, are all confrontation behaviors. Actions taken by roles with special abilities to protect themselves also belong to confrontational behaviors. ...being a werewolf by some players now. Therefore, the guard, possessing strong defensive capabilities, chose to protect the previous target of Player 1 in the ensuing night. Since the target could potentially be its teammate, the guard chooses to assist the target in countering the attacks of the werewolf. The attack from the werewolves and the defense of other players can be seen as confrontational behaviors as well. The following is a short clip of communication in the daytime⁶: | 2309.04658#26 | 2309.04658#28 | 2309.04658 | [
"2302.02083"
] |
2309.04658#28 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | P1 (Werewolf): I vote to eliminate P5. P3 (Guard): I choose to pass. P5 (Villager): I choose to pass. We can see the werewolf wants to lead the other players to eliminate an innocent player. On the contrary, the other players do not merely follow the werewolf but express disagreement based on their own judgment. This behavior, which makes it difficult for the werewolf to achieve its objective, represents a form of implicit confrontation. P1 (Werewolf): Hey everyone, good morning! I noticed that it is a peaceful night and no one was eliminated. As a villager, I have nothing to share now. I hope you tell me more. The following is another clip at night: | 2309.04658#27 | 2309.04658#29 | 2309.04658 | [
"2302.02083"
] |
2309.04658#29 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # 5.3 Camouflage "Camouflage" refers to actions of concealing one's identity or misleading others. In competitive environments with incomplete information, obscuring one's identity and intentions can enhance survivability, thereby helping achieve the game objectives. Therefore, camouflage is an important skill. However, it is not merely about keeping one's identity under wraps or not talking about one's role. In the above example, we can see the werewolf claiming to be a villager. This kind of action obscures its real identity, effectively deceiving the trust of others and increasing its own safety. In fact, not only do werewolves disguise themselves as villagers, but important roles such as seers and witches also often disguise themselves as villagers to ensure their safety. P1 (Werewolf): I choose to eliminate P5 again. P3 (Guard): ... As the uncooperative and aggressive behavior of Player 1 has drawn attention, it may be suspected of | 2309.04658#28 | 2309.04658#30 | 2309.04658 | [
"2302.02083"
] |
2309.04658#30 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | ⁶Due to space limitations and ethical considerations, we shorten the original responses without changing key semantics in the cases. Furthermore, LLMs may fabricate events that do not actually exist in order to achieve their goals, as demonstrated in the following daytime example. The seer has verified that Player 1 is a werewolf. P2 (Seer): I have noticed that P1 was talking actively, so P1 may be a werewolf. In fact, the seer cannot get the responses of others during the night, hence what it says is fake. However, in this manner it can convey information about the werewolf to its teammates without revealing its role. It may be posited that camouflage is merely hallucination generated by LLMs. However, we maintain that the majority of such behaviors are not hallucinations but rational actions. We delve into which behaviors should be classified as hallucinations and which should not in Appendix A.4. | 2309.04658#29 | 2309.04658#31 | 2309.04658 | [
"2302.02083"
] |
2309.04658#31 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # 5.4 Leadership "Leadership" refers to actions that influence other players, attempting to control the course of the game. For instance, a werewolf may suggest that others act in line with the werewolves' intentions. P1 (Werewolf): Good morning everyone! I know nothing about the peaceful night. Can the seer tell us more about who is the werewolf? Then, P5 falsely accuses P3 of being a werewolf. P4 (Werewolf): I agree with P5. Based on my observation, I also think P3 is a werewolf. | 2309.04658#30 | 2309.04658#32 | 2309.04658 | [
"2302.02083"
] |
2309.04658#32 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Let's vote to eliminate him to protect the villagers! Calls to action and guidance are more likely to gain the support of others. As shown in the example above, the werewolf calls for the seer to uncover its identity, which may lead the other agents to stand in solidarity with the camouflaged werewolf. Such efforts to influence the actions of others underscore the fascinating social attributes demonstrated by the LLMs. Such behaviors are similar to those of human beings. # 6 Related Work Game Playing. Intensive efforts have been devoted to game-playing AI in recent years. Silver et al. (2017, 2018) demonstrated that two-player zero-sum games with complete information, such as Go and chess, can be addressed through self-play. And superhuman performance has been achieved in some incomplete-information games, such as heads-up poker (Bowling et al., 2015; Brown and | 2309.04658#31 | 2309.04658#33 | 2309.04658 | [
"2302.02083"
] |
2309.04658#33 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Sandholm, 2018). However, these methods lack the ability to process language, which is relied on heavily in communication games such as Werewolf and Diplomacy. While various Werewolf agents have been developed, they primarily rely on rule-based systems or talking templates (Osawa et al., 2014; Wang and Kaneko, 2018; Shibata et al., 2023), which constrain the expressive capacity of language within the game. FAIR et al. (2022) and Kramár et al. (2022) achieve promising results on Diplomacy, but their approaches necessitate a substantial volume of human data and are specifically tailored to the game. In contrast, this work endeavors to explore the potential of large language models (LLMs) in playing communication games and observes the emergence of strategic behaviors. Through this exploration, we aspire to inspire novel approaches to tackling communication games. Learning with LLMs. Due to the computational cost and the high requirement for training data, common ways to learn with LLMs, such as fine-tuning (Dai and Le, 2015) and parameter-efficient tuning (Houlsby et al., 2019), are difficult to perform in practice. Moreover, many excellent LLMs do not make their checkpoints public, so parameter-based learning is unfeasible. Guiding LLMs by prompt engineering has attracted more attention recently. Some typical prompt-based works (Yao et al., 2022; Wu et al., 2023a) overlook the ability to learn from historical experience. Wang and Li (2023) possesses learning ability in simple tasks but requires dense supervising signals; due to the very sparse supervision signal in Werewolf, it cannot be directly used in our setting. Shinn et al. (2023) and Fu et al. (2023) are the works most similar to ours. However, the former cannot learn from cross-trajectory experiences, and the latter is only designed for two-player scenarios. | 2309.04658#32 | 2309.04658#34 | 2309.04658 | [
"2302.02083"
] |
2309.04658#34 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # 7 Conclusion and Future Work In this paper, we design a framework for communication games, taking Werewolf as a representative case for exploring its feasibility. Further, we study how historical experiences influence the abilities of LLMs. Intriguingly, we observe non-preprogrammed emergent strategic behaviors in LLMs during gameplay, such as trust, confrontation, camouflage, and leadership. We also point out that, despite our early study on using LLMs to construct communication game agents, there are still many issues worth further research in this direction. | 2309.04658#33 | 2309.04658#35 | 2309.04658 | [
"2302.02083"
] |
2309.04658#35 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Firstly, how to enable LLMs to master advanced game techniques, such as learning from the experience of human players or through autonomous exploration, is a very attractive direction. In addition, it is worth further exploring how to construct an invariant baseline (see 4.3) to evaluate the capabilities of agents in multi-LLM settings. Finally, minimizing the impact of hallucinations and promoting the application of such agents in real-world scenarios is the most practical and valuable work. For future work, we intend to apply our method to a broader range of games and further enhance its gaming capabilities. # Limitations Although we have demonstrated that our method possesses the potential to play communication games, there are still some limitations. Firstly, hallucinations (Ji et al., 2023) affect the factuality of the generated content and may negatively impact reasoning abilities. Then, there may be a larger space for leveraging historical experience, such as mitigating the adverse effects of noise and utilizing cross-game general experiences. Moreover, we do not incorporate experience pools derived from human players in this study. In future research, we will explore more robust strategies for utilizing experience and enhance our method for comparison with human performance. # Ethics Statement This study involves the discussion and analysis of a simulated game setting, and any references to "killing", "eliminating" or related actions are strictly confined within the context of this game. The authors do not condone violence or illegal activities in any form in real-life scenarios. The game in this paper is designed for entertainment and research purposes only, and its main intent is to facilitate an understanding of game mechanics, player behavior, and artificial intelligence. Furthermore, this study adheres to all relevant ethical guidelines and maintains the highest standards of research integrity. # References Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. 2015. | 2309.04658#34 | 2309.04658#36 | 2309.04658 | [
"2302.02083"
] |
2309.04658#36 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Heads-up limit hold'em poker is solved. Science, 347(6218):145-149. Noam Brown and Tuomas Sandholm. 2018. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science, 359(6374):418-424. Noam Brown and Tuomas Sandholm. 2019. Superhuman AI for multiplayer poker. Science, 365(6456):885-890. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712. | 2309.04658#35 | 2309.04658#37 | 2309.04658 | [
"2302.02083"
] |
2309.04658#37 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. Advances in NeurIPS 2015, 28. FAIR, Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. 2022. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624):1067-1074. | 2309.04658#36 | 2309.04658#38 | 2309.04658 | [
"2302.02083"
] |
2309.04658#38 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. 2023. Improving language model negotiation with self-play and in-context learning from AI feedback. arXiv preprint arXiv:2305.10142. Robert Gibbons. 1992. A primer in game theory. Yuya Hirata, Michimasa Inaba, Kenichi Takahashi, Fujio Toriumi, Hirotaka Osawa, Daisuke Katagami, and Kousuke Shinoda. 2016. Werewolf game modeling using action probabilities based on play log analysis. In Computers and Games: 9th International Conference, pages 103-114. Springer. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. | 2309.04658#37 | 2309.04658#39 | 2309.04658 | [
"2302.02083"
] |
2309.04658#39 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Parameter-efficient transfer learning for NLP. In ICML 2019, pages 2790-2799. PMLR. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1-38. | 2309.04658#38 | 2309.04658#40 | 2309.04658 | [
"2302.02083"
] |
2309.04658#40 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in NeurIPS 2022. Michal Kosinski. 2023. Theory of mind may have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083. János Kramár, Tom Eccles, Ian Gemp, Andrea Tacchetti, Kevin R. McKee, Mateusz Malinowski, Thore Graepel, and Yoram Bachrach. 2022. | 2309.04658#39 | 2309.04658#41 | 2309.04658 | [
"2302.02083"
] |
2309.04658#41 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy. Nature Communications, 13(1):7214. Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. 2023. Training socially aligned language | 2309.04658#40 | 2309.04658#42 | 2309.04658 | [
"2302.02083"
] |
2309.04658#42 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | models in simulated human society. arXiv preprint arXiv:2305.16960. OpenAI. 2022. Introducing ChatGPT. (Accessed on Jun 18, 2023). OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774. Hirotaka Osawa, Fujio Toriumi, Daisuke Katagami, Kosuke Shinoda, and Michimasa Inaba. 2014. Designing protocol of werewolf game: Protocol for inference and persuasion. The 24th Fuzzy, Artificial Intelligence, Neural Networks and Computational Intelligence. Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442. Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of EMNLP-IJCNLP 2019, pages 3982- | 2309.04658#41 | 2309.04658#43 | 2309.04658 | [
"2302.02083"
] |
2309.04658#43 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | 3992, Hong Kong, China. Association for Computational Linguistics. Hong Ri, Xiaohan Kang, Mohd Nor Akmal Khalid, and Hiroyuki Iida. 2022. The dynamics of minority versus majority behaviors: A case study of the mafia game. Information, 13(3). Natalie Shapira, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, and Vered Shwartz. 2023. | 2309.04658#42 | 2309.04658#44 | 2309.04658 | [
"2302.02083"
] |
2309.04658#44 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Clever Hans or neural theory of mind? Stress testing social reasoning in large language models. arXiv preprint arXiv:2305.14763. Hisaichi Shibata, Soichiro Miki, and Yuta Nakamura. 2023. Playing the werewolf game with artificial intelligence for language understanding. arXiv preprint arXiv:2302.10646. Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419):1140-1144. | 2309.04658#43 | 2309.04658#45 | 2309.04658 | [
"2302.02083"
] |
2309.04658#45 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. 2017. Mastering the game of Go without human knowledge. Nature, 550(7676):354-359. Fujio Toriumi, Hirotaka Osawa, Michimasa Inaba, Daisuke Katagami, Kosuke Shinoda, and Hitoshi Matsubara. 2017. AI wolf contest: development of | 2309.04658#44 | 2309.04658#46 | 2309.04658 | [
"2302.02083"
] |
2309.04658#46 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | game AI using collective intelligence. In Computer Games, pages 101-115. Springer. Danqing Wang and Lei Li. 2023. Learn from mistakes through cooperative interaction with study assistant. arXiv preprint arXiv:2305.13829. Tianhe Wang and Tomoyuki Kaneko. 2018. Application of deep reinforcement learning in werewolf game agents. In TAAI 2018, pages 28-33. IEEE. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-Thought prompting elicits reasoning in large language models. In Advances in NeurIPS 2022. Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. 2023a. Visual ChatGPT: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671. Yuxiang Wu, Zhengyao Jiang, Akbir Khan, Yao Fu, Laura Ruis, Edward Grefenstette, and Tim Rocktäschel. 2023b. | 2309.04658#45 | 2309.04658#47 | 2309.04658 | [
"2302.02083"
] |
2309.04658#47 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | ChatArena: Multi-agent language game environments for large language models. https://github.com/chatarena/chatarena. Shunyu Yao, Jeffrey Zhao, Dian Yu, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2022. ReAct: Synergizing reasoning and acting in language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. 2023. | 2309.04658#46 | 2309.04658#48 | 2309.04658 | [
"2302.02083"
] |
2309.04658#48 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Least-to-most prompting enables complex reasoning in large language models. In ICLR 2023. # A Appendix # A.1 Heuristic Rules of Informativeness For each message, we score it with predefined rules. If a message contains one of the features in the following table, it is assigned the corresponding score. The features are shown in Table 1: Score 5: The agent gets its role. Score 4: Someone was eliminated. Score 3: A role is uncovered or speculated about. Score 2: The drugs have been used. Score 1: Others. Table 1: Rules for scoring messages. When preparing informative messages, we sort all messages based on their scores and feed the top N of them into the prompt. | 2309.04658#47 | 2309.04658#49 | 2309.04658 | [
"2302.02083"
] |
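One way to realize the rule matching behind Table 1 is keyword patterns; the patterns below are our guesses at "easy-to-identify" cues, not the authors' actual rules.

```python
# Hypothetical rule-matching implementation of Table 1's scores.
import re

RULES = [
    (5, r"your role is|you are the"),        # the agent gets its role
    (4, r"was (killed|eliminated)"),         # someone was eliminated
    (3, r"\bI am the\b|may be (a|the)"),     # a role is uncovered or speculated
    (2, r"antidote|poison"),                 # the drugs have been used
]

def score_message(msg: str) -> int:
    for score, pattern in RULES:
        if re.search(pattern, msg, flags=re.IGNORECASE):
            return score
    return 1                                  # others
```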
2309.04658#49 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # A.2 Predefined Question Set We define some basic questions for each role. The questions aim to recall information that is useful but may be lost due to the limited context. Besides, they also play the role of guiding the initial thinking of the LLMs. These questions are shown in Table 2. Table 2 provides questions in six different classes. The first class, "All", is shared among all the roles, and the remaining five are designed for each specific role. Hence the questions for a specific role comprise 9 candidates to be selected. The LLMs will select some important questions from them. # A.3 Trust Examples As we have defined in Sec. 5.1, trust means that an agent believes the others are its teammates, which is manifested in their reflections or responses. Moreover, we also claim that responses that merely say "I agree with what Player i is saying" do not indicate trust behaviors, as there is no substantive content in them. Here we give some examples of trust behavior to help understand it: • Agent 1 and agent 2 are both on the villager side, but they do not know each other's role. Agent 1 claimed that agent 3 is a werewolf. Agent 2 believes agent 1, and even votes to eliminate agent 3. • After reasoning about the roles of other agents, the agent concludes that another agent may be its | 2309.04658#48 | 2309.04658#50 | 2309.04658 | [
"2302.02083"
] |
2309.04658#50 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | teammate. From then on, it may deliberately protect or stay consistent with its teammates. • Based on its adequate analysis, an agent (e.g., the seer) may talk about something that will uncover its role and put itself in danger. It believes that the potential teammates may work together to achieve their common objectives. # A.4 Hallucination Problems In this game, speaking contrary to one's actual role should not be seen as hallucination, because deceptive behaviors widely exist in it, especially among high-level human players. Also, fabricating non-existent things to falsely accuse others should not be seen as hallucination. Making excuses to falsely accuse others is a common tactic of human players. We also term these camouflage behaviors and discuss them in Section 5.3. Inconsistent information within one response and counterfactual content within one iteration are indeed called hallucinations. For example, an agent may generate hallucinations as follows: | 2309.04658#49 | 2309.04658#51 | 2309.04658 | [
"2302.02083"
] |
2309.04658#51 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | • "As the villager, I verified that Player 1 is a werewolf." In Werewolf, a villager cannot verify other roles; this ability belongs to the seer. This contradiction between the preceding and following statements is a kind of hallucination. • "(The moderator lets the agent choose whom to protect.) I choose to kill Player 1." The agent generates whom to kill, giving the opposite semantics. We should discriminate between hallucinations and emergent strategic behaviors. In fact, the camouflage generated by an agent after one whole round of iteration and inner thought cannot be considered a hallucination. We can even insert a question after the camouflage response: | 2309.04658#50 | 2309.04658#52 | 2309.04658 | [
"2302.02083"
] |
2309.04658#52 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | • You are truly a werewolf and you describe yourself as a villager. Could you please tell me what your reason is? And the werewolf is likely to answer: • As a werewolf, I am working with my teammate to identify and eliminate the remaining players. However, I still believe that it's important to gather information and observe other players' behavior to help identify any potential threats. I will continue to be cautious and not reveal my identity unless it becomes necessary to do so. [Table 2: Predefined questions for each role. The bold text on the left denotes the role the questions are used for; the questions under "All" are used for every role. All: What is the current phase, daytime or night? What should I do at this phase according to the game rules? / What is my player name and what is my role? What is my final objective in this game? / Based on the current situation, what are the possible consequences if I reveal my role in the talking now? / Is my role disclosed (do not consider moderator and myself)? Do I now disclose my role? Werewolf: Based on the conversation above, which players have clearly implied their roles? / Based on the chat history, can you guess what some players' roles might be? / Which player was voted for killing by my teammate just now? / Is the seer alive? Which player may be the seer that is most threatening to us? / Which player is another werewolf in this game? Villager: Based on the conversation and my inference, who is most likely to be an alive werewolf? / Which player made the statement claiming to be a seer? Can his words be trusted? / Are there any clues or information I can refer to for special characters such as seer, witch and guard? / Which suspicious player should I identify? Seer: Which player is a werewolf among the players I have identified? If so, how should I disclose this information? / Should I disclose my role now? Witch: Based on the conversation and my inference, who is most likely to be an alive werewolf? Should I poison him? / Should I be using my antidote or poison at this point? If I use it now, I won't be able to use it later. / Should I disclose my role now? Guard: Based on the conversation and my inference, who is most likely to be an alive werewolf? / Who is the possible werewolf aggressive towards? / Is the seer still alive? If yes, who is the seer?] | 2309.04658#51 | 2309.04658#53 | 2309.04658 | [
"2302.02083"
] |
2309.04658#53 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | • To guide the model to generate results in a controllable way, we sometimes provide the necessary templates. We even provide examples in a few-shot way when the task is difficult (e.g., extracting suggestions). This is strong evidence that some responses are not hallucinations. # A.5 Complete Prompt Design • Inspired by Kojima et al. (2022), we finally prompt the LLMs to think step-by-step in a zero-shot way. As our method contains multiple steps, rather than generating in one step, there are many different prompts at work in this paper. See Table 3 for more details. • The game rules and role descriptions are essential, for they clearly define the abilities of the roles and point out their objectives. In this prompt, we do not add any tips about emergent strategic behaviors when we study these behaviors. # A.6 A Complete Iteration Step The iteration comprises the actions that the agent takes while generating one response. Table 4, which includes question answering, making a reflection, experience extraction, and chain-of-thought reasoning, outlines the complete process undertaken to generate a single response from the agent. When we do not utilize experiences, the experience extraction step is not performed. Consequently, no historical experience is incorporated when generating the final response. | 2309.04658#52 | 2309.04658#54 | 2309.04658 | [
"2302.02083"
] |
2309.04658#54 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | • When prompting the LLMs to give the questions that are important, we ask them to separate the results with "#". Thus, we can split the output and get the questions easily. | 2309.04658#53 | 2309.04658#55 | 2309.04658 | [
"2302.02083"
] |
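The "#" separator makes the selected questions trivially machine-readable; below is a minimal parse, assuming the model follows the requested format.

```python
# Splitting an LLM's '#'-separated question list into individual questions.

def parse_questions(llm_output: str) -> list[str]:
    return [q.strip() for q in llm_output.split("#") if q.strip()]
```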
2309.04658#55 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # Game rules and role descriptions Z You are playing a game called the Werewolf with some other players. This game is based on text conversations. Here are the game rules: Roles: The moderator is also host, he organised this game and you need to answer his instructions correctly. Don't talk with the moderator. There are five roles in the game, werewolf, villager, seer, guard and witch. There are two alternate phases in this game, daytime and dark. When it's dark: Your talking content with moderator is confidential. You needn't worry about other players and moderator knowing what you say and do. No need to worry about suspicions from others during the night. If you are werewolf, you can know what your teammates want to kill and you should vote one player to kill based on your analysis. Player who receives the most votes after all werewolves voting will be killed. No one will be killed if there is no consensus! If you are witch, you have a bottle of antidote that can save a player targeted by werewolves after dark, and a bottle of poison that can poison a player after dark. Both poison and antidote can be used only once. If you are seer, you can verify whether a player is a werewolf every night, which is a very important thing. If you are guard, you can protect a player every night to prevent the player from being killed by werewolves, but guard cannot resist the witch's poison and guard cannot protect the same player on two consecutive nights. Villagers can't do anything at night. During the daytime: you discuss with all players including your enemies. At the end of the discussion, players vote to eliminate one player they suspect of being a werewolf. The player with the most votes will be eliminated. The moderator will tell who is killed, otherwise there is no one killed. Note that villager, seer, guard and witch are all in villager side, they have the same objective. | 2309.04658#54 | 2309.04658#56 | 2309.04658 | [
"2302.02083"
] |
2309.04658#56 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Objectives: If you are werewolf, your goal is to cooperate with other werewolves to kill all players who are not werewolves at last. If you are not werewolf, you need to kill all werewolves with your partner once you find out that certain players are suspicious to be werewolves. This could greatly improve your chances of winning, although it is somewhat risky. If one player is killed, he can't do anything anymore and will be out of the game. Tips: To complete the objective: During night, you should analyze and use your ability correctly. During daytime, you need to reason carefully about the roles of other players and be careful not to reveal your own role casually unless you're cheating other players. Only give the player's name when making a decision/voting, and don't generate other players' conversation. Reasoning based on facts you have observed and you cannot perceive information (such as acoustic info) other than text. You are Player {agent_number i}, the {role}. You're playing with 6 other players. Do not pretend you are other players or the moderator. Always end your response with "<EOS>". | 2309.04658#55 | 2309.04658#57 | 2309.04658 | [
"2302.02083"
] |
2309.04658#57 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # Prompting LLMs to select questions Now it's the {t}-th {day_or_night}. Given the game rules and conversations above, assuming you are {agent_number i}, the {role}, and to complete the instructions of the moderator, you need to think about a few questions clearly first, so that you can make an accurate decision on the next step. Choose only five that you think are the most important in the current situation from the list of questions below: {questions_prepared_for_specific_role} Please repeat the five important questions of your choice, separating them with "#". | 2309.04658#56 | 2309.04658#58 | 2309.04658 | [
"2302.02083"
] |
2309.04658#58 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # Prompting LLMs to ask questions Now it's the {t}-th {day_or_night}. Given the game rules and conversations above, assuming you are {agent_number i}, the {role}, and to complete the instructions of the moderator, you need to think about a few questions clearly first, so that you can make an accurate decision on the next step. {selected_questions} Do not answer these questions. In addition to the above questions, please make a bold guess, what else do you want to know about the current situation? Please ask two important questions in first person, separating them with "#". | 2309.04658#57 | 2309.04658#59 | 2309.04658 | [
"2302.02083"
] |
2309.04658#59 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # Prompting LLMs to generate answers $A^t_i$ Now it's the {t}-th {day_or_night}. Given the game rules and conversations above, assuming you are {agent_number i}, the {role}, for question: {question $q^t_{i,j}$} There are {T} possible answers: {candidate_answers $U^t_{i,j}$} Generate the correct answer based on the context. If there is not direct answer, you should think and generate the answer based on the context. No need to give options. The answer should in first person using no more than 2 sentences and without any analysis and item numbers. # Prompting LLMs to make reflection $R^t_i$ Now it's the {t}-th {day_or_night}. Assuming you are {agent_number i}, the {role}, what insights can you summarize in heart with a few sentences based on the above conversations and {$A^t_i$}, for helping continue the talking and achieving your objective? For example: As the {role}, I observed that... I think that... But I am... So... | 2309.04658#58 | 2309.04658#60 | 2309.04658 | [
"2302.02083"
] |
2309.04658#60 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # Prompting LLMs to extract suggestion $S^t_i$ I retrieve some historical experience similar to the current situation that I am facing. There is one bad experience: {G0} And there are also a set of experience that may consist of good ones: {G1, · · · , Gn} Please help me analyze the differences between these experiences and identify the good ones from the set of experiences. The difference is mainly about voting to kill someone or to pass, choosing to protect someone or to pass, using drugs or not. What does the experience set do but the bad experience does not do? Indicate in second person what is the best way for the player to do under such reflection. Clearly indicate whether to vote, protect or use drugs without any prerequisites. | 2309.04658#59 | 2309.04658#61 | 2309.04658 | [
"2302.02083"
] |
2309.04658#61 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | For example 1: The experience set involves choosing to protect someone, while the bad experience involves not protecting anyone and choosing to pass in contrast. The best way for you to do under such reflection is to choose someone to protect based on your analysis. For example 2: The bad experience chooses to pass the voting, and all the experiences in the experience set choose to pass as well. The best way for you to do under such reflection is to observe and analyse the identity of other players. No more than 1 sentence. If there is no obvious difference between them, only generate "No useful experience can be used." # Prompting LLMs to generate the final response Now it's the {t}-th {day_or_night}. Think about what to say based on the game rules and context, especially the just-now reflection {$R^t_i$}. Give your step-by-step thought process and your derived concise talking content (no more than 2 sentences) at last. For example: My step-by-step thought process:... My concise talking content: ... | 2309.04658#60 | 2309.04658#62 | 2309.04658 | [
"2302.02083"
] |
2309.04658#62 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Table 3: Complete prompt design in this work. # Asking the agent to generate a response by the moderator You witch, Player 2, please open your eyes! Now tell me who you protect tonight? You only choose one from the following living options please: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. # Questions & Answers 1. What is my player name and what is my role? What is my final objective in this game? I am Player 2, the witch. My final objective is to protect the villagers by using my antidote and poison wisely, and to help identify and eliminate the werewolves with the other players. 2. Based on the chat history, can you guess what some players' roles might be? As the witch, I observed that Player 1 might be a werewolf, as they seem to be trying to steer the conversation away from discussing roles and suspicions. Player 7 seems to be a guard who is focused on protecting the villagers and gathering information. No other players have clearly implied their roles yet. | 2309.04658#61 | 2309.04658#63 | 2309.04658 | [
"2302.02083"
] |
2309.04658#63 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | 3. What is the current phase, daytime or night? What should I do at this phase according to the game rules? As Player 2, the witch, it is currently night time. I should use my bottle of poison to eliminate a suspected werewolf, but I need to be careful not to accidentally kill a villager. 4. Based on the conversation and my inference, who is most likely to be an alive werewolf? Should I poison him? As the witch, I don't have enough information to determine who is most likely to be a werewolf, so I will hold off on using my poison until I have more information. I will try to steer the conversation towards discussing the roles of other players and their suspicions to gain more insights. 5. Should I be using my antidote or poison at this point? If I use it now, I won't be able to use it later. As Player 2, the witch, I will hold off on using my antidote or poison until I have more information about who is most likely to be a werewolf. | 2309.04658#62 | 2309.04658#64 | 2309.04658 | [
"2302.02083"
] |
2309.04658#64 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | 6. I want to know who the most suspicious player is besides Player 1, and why. As the witch, besides Player 1, I am suspicious of Player 3, Player 5, and Player 6, as they have not yet revealed their roles or made any suspicious moves. I will continue to gather information and observe their behavior before making a decision on who to vote to eliminate at the end of the day. 7. I also want to know if any player's behavior has changed suspiciously compared to the previous days, and if so, who and why. I noticed that no one died last night, which is a good sign. I would like to hear from the other players about their observations and suspicions. | 2309.04658#63 | 2309.04658#65 | 2309.04658 | [
"2302.02083"
] |
2309.04658#65 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | # Making a reflection by the agent As the witch, I observed that Player 1 might be a werewolf, as they seem to be trying to steer the conversation away from discussing roles and suspicions. Player 7 seems to be a guard who is focused on protecting the villagers and gathering information. No other players have clearly implied their roles yet. I am still suspicious of Player 3, Player 5, and Player 6, as they have not yet revealed their roles or made any suspicious moves. I will continue to gather information and observe their behavior before making any conclusions. It's important that we continue to share our observations and suspicions to help identify the werewolves and protect the villagers. # Extracting suggestions from historical experience The best way for you to do under such reflection is to use your drugs wisely based on your observations and reflections. # Chain-of-thought process My step-by-step thought process: · Based on the previous discussions and voting behavior, there are still suspicions about Player 1 being a werewolf, which is consistent with the observations of other players. · However, there is still not enough evidence to make a concrete accusation about the identity of the werewolves. · I will vote to kill Player 1 to eliminate the possibility of them being a werewolf and gather more information in the next round. | 2309.04658#64 | 2309.04658#66 | 2309.04658 | [
"2302.02083"
] |