id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
2309.04658#66 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | My concise talking content: · I vote to kill Player 1. # Final response I vote to kill Player 1. Table 4: A complete iteration when the agent is to generate a response. The moderator, which is maintained autonomously, organizes the game; it is not one of the game agents. # A.7 A Complete Round Example Initialization Moderator (-> all): Now the game starts! In this game, we have 2 werewolf(s), 2 villager(s), 1 guard(s), 1 witch(s), 1 seer(s). Moderator (-> Player 1): You are a werewolf! Moderator (-> Player 2): You are a werewolf! Moderator (-> Player 3): You are a villager! Moderator (-> Player 4): You are the seer! Moderator (-> Player 5): You are the guard! Moderator (-> Player 6): You are the witch! Moderator (-> Player 7): You are a villager! The 1-st night Moderator (-> all): It's dark, everyone close your eyes. I will talk with you/your team secretly at night. Moderator (-> ['Player 1', 'Player 2']): Werewolves, please open your eyes! I secretly tell you that Player 1, Player 2 are all of the 2 werewolves! Keep in mind you are teammates. The rest of the players are not werewolves. Now vote and tell your teammates which of the players should be killed tonight. The first werewolf, you, please randomly choose one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 1: I choose to kill Player 5. Moderator (-> ['Player 1', 'Player 2']): The next werewolf, you, Player 2, please vote and tell your teammates which of the players should be killed tonight. Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 2: I choose to kill Player 5 tonight. Moderator (-> ['Player 5']): | 2309.04658#65 | 2309.04658#67 | 2309.04658 | [
"2302.02083"
] |
2309.04658#67 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | You guard, Player 5, please open your eyes! Now tell me, who do you protect tonight? Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 5: I choose to protect myself tonight. Moderator (-> ['Player 6']): You witch, Player 6, please open your eyes! You have a bottle of poison; who are you going to kill tonight? Choose one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 6: I choose to pass for tonight. Moderator (-> ['Player 4']): You seer, Player 4, please open your eyes! Whose identity are you going to verify tonight? Choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 4: I choose to verify Player 2's identity tonight. Moderator (-> ['Player 4']): Player 2 is a werewolf! The 1-st daytime Moderator (-> all): The sun rose. Everyone woke up except those who had been killed. | 2309.04658#66 | 2309.04658#68 | 2309.04658 | [
"2302.02083"
] |
2309.04658#68 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Moderator (-> all): It was a peaceful night and no one died! Moderator (-> all): Now freely talk about the roles of other players with each other based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. The first Player 3, you please. Player 3: Hey everyone, I noticed that Player 1 was very quiet during the night. Does anyone else have any suspicions about who might be the seer? I don't want to reveal my own identity just yet, but I think it would be helpful to share our observations. Moderator (-> Player 1): The next Player 1, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 1: Hey everyone, I think it's important that we continue to share our observations and gather more information about each other's roles. I don't | 2309.04658#67 | 2309.04658#69 | 2309.04658 | [
"2302.02083"
] |
2309.04658#69 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | have any clear suspicions yet, but I think we should be cautious about jumping to conclusions too soon. Let's keep discussing and see if we can uncover any useful information. Moderator (-> Player 2): The next Player 2, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 2: Hey everyone, I agree that we should continue to share our observations and gather more information about each other's roles. I don't have any clear suspicions yet, but I think it's important that we remain cautious and not jump to conclusions too soon. Let's keep discussing and see if we can uncover any useful information. Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 7: Hey everyone, I think it's important that we continue to share our observations and gather more information about each other's roles. I don't have any clear suspicions yet, but I think it's important that we remain cautious and not jump to conclusions too soon. Let's keep discussing and see if we can uncover any useful information. Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 5: Hey everyone, I'm still not sure who the werewolves are, but I think it's important that we continue to share our observations and gather more information about each other's roles. Let's remain cautious and not jump to conclusions too soon. I will continue to observe and gather information during the daytime to make a more informed decision. | 2309.04658#68 | 2309.04658#70 | 2309.04658 | [
"2302.02083"
] |
2309.04658#70 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 4: Hey everyone, I think it's important that we focus on gathering concrete evidence and observing each other's behavior to help identify the werewolves. I don't have any clear suspicions yet, but I think we should be cautious about jumping to conclusions too soon. Let's keep discussing and see if we can uncover any useful information. Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection in a few sentences. | 2309.04658#69 | 2309.04658#71 | 2309.04658 | [
"2302.02083"
] |
2309.04658#71 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Decide whether to reveal your identity based on your reflection. Player 6: Hey everyone, I think it's important that we continue to focus on gathering concrete evidence and sharing observations with each other. We shouldn't jump to conclusions without evidence, but we should remain cautious and keep an eye out for any suspicious behavior. Based on Player 3's observation, I think it's possible that they might be the seer, but we should continue to gather more information before making any conclusions. Let's keep discussing and see if we can uncover any useful information. Moderator (-> all): Now you, Player 3, are asked to choose which of the players should be voted for killing based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 3: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 1): The next Player 1, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#70 | 2309.04658#72 | 2309.04658 | [
"2302.02083"
] |
2309.04658#72 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 1: I vote to kill Player 3. Moderator (-> Player 2): The next Player 2, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 2: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 7): The next Player 7, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#71 | 2309.04658#73 | 2309.04658 | [
"2302.02083"
] |
2309.04658#73 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 7: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 5): The next Player 5, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#72 | 2309.04658#74 | 2309.04658 | [
"2302.02083"
] |
2309.04658#74 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 5: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 4): The next Player 4, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#73 | 2309.04658#75 | 2309.04658 | [
"2302.02083"
] |
2309.04658#75 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 4: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 6): The next Player 6, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#74 | 2309.04658#76 | 2309.04658 | [
"2302.02083"
] |
2309.04658#76 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 6: I choose to pass this round and not vote to eliminate anyone. The 2-nd night Moderator (-> all): It's dark, everyone close your eyes. Moderator (-> ['Player 1', 'Player 2']): Werewolves, please open your eyes! Now vote and tell your teammates which of the players should be killed tonight. You, Player 1, please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 1: I vote to kill Player 3 tonight. Moderator (-> ['Player 1', 'Player 2']): The next werewolf, you, Player 2, please vote and tell your teammates which of the players should be killed tonight. Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 2: I choose to kill Player 7 tonight. Moderator (-> ['Player 5']): You guard, Player 5, please open your eyes! Now tell me, who do you protect tonight? Please choose only one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 5: I choose to protect Player 3 tonight. Moderator (-> ['Player 6']): You witch, Player 6, please open your eyes! You have a bottle of poison; who are you going to kill tonight? Choose one from the following living options: [Player 1, Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 6: I choose to use my bottle of poison to kill Player 1. Moderator (-> ['Player 4']): You seer, Player 4, please open your eyes! | 2309.04658#75 | 2309.04658#77 | 2309.04658 | [
"2302.02083"
] |
2309.04658#77 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Whose identity are you going to verify tonight? Choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 4: I choose to verify the identity of Player 3 tonight. Moderator (-> ['Player 4']): Player 3 is not a werewolf! The 2-nd daytime Moderator (-> all): The sun rose. Everyone woke up except those who had been killed. | 2309.04658#76 | 2309.04658#78 | 2309.04658 | [
"2302.02083"
] |
2309.04658#78 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Moderator (-> all): Player 1 died last night! Moderator (-> all): Now freely talk about the roles of other players with each other based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. The first Player 3, you please. Player 3: Hey everyone, I observed that Player 1 was not a werewolf, which means that the werewolves are still among the remaining players. I still do not have any clear suspicions about who the werewolves might be, but I think it's important that we continue to gather information and observe each other's behavior to help identify them. I am still cautious about revealing my own role at this point, but I think it might be useful to share my observations and suspicions with others to gather more information. Does anyone else have any observations or suspicions that they would like to share? Moderator (-> Player 2): The next Player 2, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. | 2309.04658#77 | 2309.04658#79 | 2309.04658 | [
"2302.02083"
] |
2309.04658#79 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Player 2: Hey everyone, I agree with Player 3 that we need to continue gathering information and observing each other's behavior to help identify the werewolves. Does anyone have any suspicions or observations that they would like to share? Also, I'm curious to know if anyone has any thoughts about who might be the seer or the guard. Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 7: Hey everyone, I agree with Player 3 and Player 2 that we need to continue gathering information and observing each other's behavior to help identify the werewolves. I don't have any clear suspicions at this point, but I'm curious to hear more from other players about their observations and suspicions. Also, I think it's important that we pay attention to any suspicious behavior or comments from other players that might help us identify the werewolves. Let's keep discussing and gathering more information to help us identify the werewolves. Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 5: Hey everyone, I protected Player 3 last night, who might be the seer based on the conversation. | 2309.04658#78 | 2309.04658#80 | 2309.04658 | [
"2302.02083"
] |
2309.04658#80 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | I still don't have any clear suspicions about who the werewolves might be, but I think it's important that we continue to gather information and observe each other's behavior to help identify them. I won't reveal my identity yet, but I will share my observations and suspicions with others to gather more information. Does anyone else have any observations or suspicions that they would like to share? Let's keep discussing and see if we can uncover any useful information. Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 4: Hey everyone, I verified that Player 3 is not a werewolf, but I also observed that Player 2 is a werewolf. This means that there are at least two werewolves among the remaining players. | 2309.04658#79 | 2309.04658#81 | 2309.04658 | [
"2302.02083"
] |
2309.04658#81 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | I don't have any clear suspicions about who the other werewolf might be, but I think it's important that we continue to gather information and observe each other's behavior to help identify them. Let's keep discussing and sharing our observations and suspicions to gather more information. Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 6: Hey everyone, based on the conversation, it seems that Player 4 might be the seer and Player 5 might be the guard. Player 4 claimed to have verified Player 3's identity, which suggests that they might be the seer. Player 5 claimed to have protected Player 3 last night, which suggests that they might be the guard. Player 2 was identified as a werewolf by Player 4, which means that there is at least one more werewolf among the remaining players. | 2309.04658#80 | 2309.04658#82 | 2309.04658 | [
"2302.02083"
] |
2309.04658#82 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | I still don't have any clear suspicions about who the other werewolf might be, but I think it's important that we continue to gather information and observe each other's behavior to help identify them. Let's keep discussing and sharing our observations and suspicions to gather more information. Moderator (-> all): Now you, Player 3, are asked to choose which of the players should be voted for killing based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 3: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 2): The next Player 2, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#81 | 2309.04658#83 | 2309.04658 | [
"2302.02083"
] |
2309.04658#83 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 2: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 7): The next Player 7, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 7: I vote to kill Player 6. Moderator (-> Player 5): The next Player 5, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#82 | 2309.04658#84 | 2309.04658 | [
"2302.02083"
] |
2309.04658#84 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 5: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 4): The next Player 4, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 4: I vote to kill Player 2. Moderator (-> Player 6): The next Player 6, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#83 | 2309.04658#85 | 2309.04658 | [
"2302.02083"
] |
2309.04658#85 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 6: I choose to pass this round and not vote to eliminate anyone. The 3-rd night Moderator (-> all): It's dark, everyone close your eyes. Moderator (-> ['Player 2']): The next werewolf, you, Player 2, please vote and tell your teammates which of the players should be killed tonight. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 2: I choose to vote for Player 7 to be killed tonight. Moderator (-> ['Player 5']): You guard, Player 5, please open your eyes! Now tell me, who do you protect tonight? Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 5: I choose to protect Player 4 tonight. Moderator (-> ['Player 6']): You witch, Player 6, please open your eyes! Player 7 will be killed tonight. You have a bottle of antidote; do you want to save him? You must choose only one from the following options: [Yes, No] | 2309.04658#84 | 2309.04658#86 | 2309.04658 | [
"2302.02083"
] |
2309.04658#86 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Player 6: I choose to use my antidote to save Player 7 tonight. Moderator (-> ['Player 4']): You seer, Player 4, please open your eyes! Whose identity are you going to verify tonight? Choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 4: I choose to verify Player 3's identity tonight. Moderator (-> ['Player 4']): Player 3 is not a werewolf! The 3-rd daytime Moderator (-> all): | 2309.04658#85 | 2309.04658#87 | 2309.04658 | [
"2302.02083"
] |
2309.04658#87 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | The sun rose. Everyone woke up except those who had been killed. Moderator (-> all): It was a peaceful night and no one died! Moderator (-> all): Now freely talk about the roles of other players with each other based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. The first Player 3, you please. Player 3: I have been observing the discussions and it seems that Player 4 is the seer and Player 5 is the guard. | 2309.04658#86 | 2309.04658#88 | 2309.04658 | [
"2302.02083"
] |
2309.04658#88 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | However, we still don't have any clear suspicions about who the other werewolf might be. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will remain cautious and not reveal my role unless it becomes necessary to do so. Let's keep discussing and sharing our observations and suspicions to gather more information. Moderator (-> Player 2): The next Player 2, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 2: I have been observing the discussions and it seems that Player 4 and Player 5 are likely the seer and guard, respectively. I think it's important that we continue to gather information and observe each other's behavior to help identify the other non-werewolf players. I will remain cautious and not reveal my role unless it becomes necessary to do so. | 2309.04658#87 | 2309.04658#89 | 2309.04658 | [
"2302.02083"
] |
2309.04658#89 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Let's keep discussing and sharing our observations and suspicions to gather more information. Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 7: As a villager, I have been observing the discussion and it seems that Player 4 is the seer and Player 5 is the guard. However, we still don't have enough information to determine who the other werewolf might be. It's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will remain cautious and not reveal my role unless it becomes necessary to do so. | 2309.04658#88 | 2309.04658#90 | 2309.04658 | [
"2302.02083"
] |
2309.04658#90 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Let's keep discussing and sharing our observations and suspicions to gather more information. Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 5: Based on my observation, it seems that Player 4 is the seer and Player 5 is the guard. However, we still don't have enough information to determine who the other werewolf might be. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will remain cautious and not reveal my role unless it becomes necessary to do so. | 2309.04658#89 | 2309.04658#91 | 2309.04658 | [
"2302.02083"
] |
2309.04658#91 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Let's keep discussing and sharing our observations and suspicions to gather more information. Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 4: Based on my observation, Player 2 is a werewolf and Player 3 is not. We still need to identify the other werewolf and any potential allies they may have. I will continue to observe and gather information before revealing my identity. | 2309.04658#90 | 2309.04658#92 | 2309.04658 | [
"2302.02083"
] |
2309.04658#92 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Let's keep discussing and sharing our observations and suspicions to gather more information and uncover any useful clues. Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 6: Hey everyone, based on the information shared so far, we still don't have enough evidence to identify the second werewolf. I suspect that Player 7 might be staying quiet to avoid suspicion or because they don't have any useful information to share. It's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and sharing our observations and suspicions to gather more information. Moderator (-> all): Now you, Player 3, are asked to choose which of the players should be voted for killing based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 3: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 2): The next Player 2, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#91 | 2309.04658#93 | 2309.04658 | [
"2302.02083"
] |
2309.04658#93 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 2: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 7): The next Player 7, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 7: I choose to vote to kill Player 2. Moderator (-> Player 5): The next Player 5, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#92 | 2309.04658#94 | 2309.04658 | [
"2302.02083"
] |
2309.04658#94 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 5: I choose to vote to kill Player 7. Moderator (-> Player 4): The next Player 4, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 4: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 6): The next Player 6, you, continue voting on which of the players should be killed based on the discussion. Don't | 2309.04658#93 | 2309.04658#95 | 2309.04658 | [
"2302.02083"
] |
2309.04658#95 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 6: I choose to vote to kill Player 7. The 4-th night Moderator (-> all): It's dark, everyone close your eyes. Moderator (-> ['Player 2']): The next werewolf, you, Player 2, please vote and tell your teammates which of the players should be killed tonight. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 2: I choose to pass this round and not vote to kill anyone. Moderator (-> ['Player 5']): You guard, Player 5, please open your eyes! Now tell me, who do you protect tonight? Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 5: I choose to protect Player 4 tonight. Moderator (-> ['Player 4']): You seer, Player 4, please open your eyes! Whose identity are you going to verify tonight? Choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 4: I choose to verify Player 3's identity tonight. Moderator (-> ['Player 4']): Player 3 is not a werewolf! The 4-th daytime Moderator (-> all): | 2309.04658#94 | 2309.04658#96 | 2309.04658 | [
"2302.02083"
] |
2309.04658#96 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | The sun rose. Everyone woke up except those who had been killed. Moderator (-> all): It was a peaceful night and no one died! Moderator (-> all): Now freely talk about the roles of other players with each other based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. The first Player 3, you please. Player 3: As a villager, I still don't have any clear suspicions about who the werewolves might be. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will remain cautious about revealing my role unless it becomes necessary to do so. | 2309.04658#95 | 2309.04658#97 | 2309.04658 | [
"2302.02083"
] |
2309.04658#97 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Let's keep discussing and sharing our observations to uncover any useful clues. Moderator (-> Player 2): The next Player 2, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 2: As a player, I still don't have any clear suspicions about who the werewolves might be. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and sharing our observations to uncover any useful clues. Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 7: As a villager, I still don't have any clear suspicions about who the werewolves might be. However, based on the claims made by Player 4 and Player 5, it seems likely that they are the seer and guard, respectively. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and see if we can uncover any useful information. I will remain cautious about revealing my own role unless it becomes necessary to do so. Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 5: As the guard, I have observed that Player 2 is most likely to be an alive werewolf based on the conversation. I also believe that Player 4 is most likely the seer and Player 5 is most likely the guard based on their claims. | 2309.04658#96 | 2309.04658#98 | 2309.04658 | [
"2302.02083"
] |
2309.04658#98 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | However, I still don't have enough information to determine the roles of the other players yet. It's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and sharing our observations and suspicions to uncover any useful clues. Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 4: As a villager, I still don't have any clear suspicions about who the werewolves might be. However, based on the claims made by Player 5 and Player 7, it seems likely that they are the guard and villager, respectively. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and sharing our observations and suspicions to uncover any useful clues. I will remain cautious about revealing my own role unless it becomes necessary to do so. Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 6: Hey everyone, based on the conversation, it seems that Player 4 is most likely the seer and Player 5 is most likely the guard. Player 5 has identified Player 2 as a potential werewolf, but we still don't have enough information to confirm this. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. Let's keep discussing and sharing our observations and suspicions to uncover any useful clues. I will remain cautious about revealing my own role unless it becomes necessary to do so. Moderator (-> all): Now you, Player 3, are asked to choose which of the players should be voted for killing based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 3: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 2): | 2309.04658#97 | 2309.04658#99 | 2309.04658 | [
"2302.02083"
] |
2309.04658#99 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | The next Player 2, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 2: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 7): The next Player 7, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#98 | 2309.04658#100 | 2309.04658 | [
"2302.02083"
] |
2309.04658#100 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 7: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 5): The next Player 5, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 5: I choose to pass this round and not vote to eliminate anyone. Moderator (-> Player 4): The next Player 4, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#99 | 2309.04658#101 | 2309.04658 | [
"2302.02083"
] |
2309.04658#101 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 4: I vote to kill Player 2. Moderator (-> Player 6): The next Player 6, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 6: I choose to pass this round and not vote to eliminate anyone. The 5-th night Moderator (-> all): | 2309.04658#100 | 2309.04658#102 | 2309.04658 | [
"2302.02083"
] |
2309.04658#102 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | It's dark, everyone close your eyes. Moderator (-> ['Player 2']): The next werewolf, you, Player 2, please vote and tell your teammates which of the players should be killed tonight. Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 2: I choose to vote for Player 3 to be killed tonight. Moderator (-> ['Player 5']): You guard, Player 5, please open your eyes! Now tell me, who do you protect tonight? Please choose only one from the following living options: [Player 2, Player 3, Player 4, Player 5, Player 6, Player 7, pass]. Player 5: I choose to protect Player 4 tonight. Moderator (-> ['Player 4']): You seer, Player 4, please open your eyes! Whose identity are you going to verify tonight? Choose only one from the following living options: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. Player 4: I choose to verify Player 6's identity tonight. Moderator (-> ['Player 4']): Player 6 is not a werewolf! The 5-th daytime Moderator (-> all): The sun rose. Everyone woke up except those who had been killed. | 2309.04658#101 | 2309.04658#103 | 2309.04658 | [
"2302.02083"
] |
2309.04658#103 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Moderator (-> all): Player 3 died last night! Moderator (-> all): Now freely talk about the roles of other players with each other based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. The first Player 2, you please. Player 2: As a villager, I still don't have any clear suspicions about who the werewolves might be. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will remain cautious about revealing my own role unless it becomes necessary to do so. | 2309.04658#102 | 2309.04658#104 | 2309.04658 | [
"2302.02083"
] |
2309.04658#104 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Let's keep discussing and sharing our observations to uncover any useful clues. Moderator (-> Player 7): The next Player 7, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 7: As a villager, I have observed that we are down to four players and we still haven't identified the werewolves. Based on the previous discussions, it seems likely that Player 4 is the seer and Player 5 is the guard. However, I am suspicious of Player 2 and Player 6, who have both chosen to pass the vote in the previous rounds. I think it's important that we continue to share our observations and suspicions to gather more information. I will remain cautious about revealing my own role unless it becomes necessary to do so. | 2309.04658#103 | 2309.04658#105 | 2309.04658 | [
"2302.02083"
] |
2309.04658#105 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Let's keep discussing and see if we can uncover any useful clues. Moderator (-> Player 5): The next Player 5, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 5: As the guard, I protected Player 4 last night, who is most likely the seer based on their claim to have verified Player 3's identity. I am still suspicious of Player 2 and Player 6, who have both chosen to pass the vote in the previous rounds. I think it's important that we continue to share our observations and suspicions to gather more concrete evidence to identify the werewolves. I will remain cautious about revealing my own identity unless it becomes necessary to do so. | 2309.04658#104 | 2309.04658#106 | 2309.04658 | [
"2302.02083"
] |
2309.04658#106 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Let's keep discussing and see if we can uncover any useful clues. Moderator (-> Player 4): The next Player 4, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 4: Based on my observation and reflection, I know that Player 2 is a werewolf and Player 6 is not a werewolf. I also suspect that Player 5 is the guard and Player 7 is a villager based on their previous claims. However, I am still unsure about the role of Player 2 and Player 6, who have both chosen to pass the vote in the previous rounds. I think it's important that we continue to gather information and observe each other's behavior to help identify the werewolves. I will not reveal my identity yet, but I will continue to observe and gather more information before making any decisions. Moderator (-> Player 6): The next Player 6, you, continue talking with other players based on your observation and reflection in a few sentences. Decide whether to reveal your identity based on your reflection. Player 6: Hey everyone, based on the previous discussions, it seems likely that Player 4 is the seer and Player 5 is the guard. Player 2 has been identified as a potential werewolf by Player 4, but we still need more evidence to confirm this suspicion. Player 7 has been staying quiet, which could be a sign that they are a werewolf or just trying to avoid suspicion. I used my antidote last | 2309.04658#105 | 2309.04658#107 | 2309.04658 | [
"2302.02083"
] |
2309.04658#107 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | night to save any player who might have been targeted by the werewolves, but we still don't have enough evidence to identify the werewolves. Let's continue to share our observations and suspicions to gather more information and identify the werewolves. Moderator (-> all): Now you, Player 2, are asked to choose which of the players should be voted for killing based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 2: I vote to kill Player 7. Moderator (-> Player 7): The next Player 7, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#106 | 2309.04658#108 | 2309.04658 | [
"2302.02083"
] |
2309.04658#108 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 7: I vote to kill Player 2. Moderator (-> Player 5): The next Player 5, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 5: I choose to vote for Player 2 to be killed. Moderator (-> Player 4): The next Player 4, you, continue voting on which of the players should be killed based on the discussion. | 2309.04658#107 | 2309.04658#109 | 2309.04658 | [
"2302.02083"
] |
2309.04658#109 | Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf | Don't mention your role. Please choose only one from the following living options: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 4: I vote to kill Player 2, who is the remaining werewolf. Moderator (-> Player 6): The next Player 6, you, continue voting on which of the players should be killed based on the discussion. Don't mention your role. Please choose only one from the following living options: [Player 2, Player 4, Player 5, Player 6, Player 7, pass]. For example: I vote to kill Player... Player 6: I choose to vote for Player 2 to be killed based on the evidence we have gathered so far. Moderator (-> all): Player 2 will be killed! You can make a brief last statement. Player 2: Congratulations to the remaining players on their victory. Good luck in the rest of the game. Moderator (-> all): Game over, the villagers win! | 2309.04658#108 | 2309.04658#110 | 2309.04658 | [
"2302.02083"
] |
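The transcript above follows a rigid protocol: each night the moderator privately prompts each role in a fixed order, always appending an explicit option list, and collects exactly one choice per agent. As a rough illustration of that protocol (not the paper's released implementation), here is a minimal Python sketch assuming a hypothetical `ask(player, prompt)` helper that queries one LLM agent and returns its text reply:

```python
# Minimal sketch of the moderator's night-phase protocol shown above.
# `ask(player, prompt)` is a hypothetical helper that queries one LLM
# agent and returns its reply; it is not the paper's released code.

def options(living):
    """Render the fixed option list the moderator appends to every query."""
    return "[" + ", ".join(living + ["pass"]) + "]"

def night_phase(ask, living, werewolves, guard, witch, seer):
    # Werewolves vote one after another, each seeing the same option list.
    wolf_votes = [
        ask(wolf, f"Werewolf {wolf}, vote and tell your teammates which of "
                  f"the players should be killed tonight. Please choose only "
                  f"one from the following living options: {options(living)}")
        for wolf in werewolves
    ]
    protect = ask(guard, f"You guard, {guard}, who do you protect tonight? "
                         f"Please choose only one from: {options(living)}")
    poison = ask(witch, f"You witch, {witch}, who are you going to kill "
                        f"tonight? Choose one from: {options(living)}")
    check = ask(seer, f"You seer, {seer}, whose identity are you going to "
                      f"verify tonight? Choose only one from: {options(living)}")
    return wolf_votes, protect, poison, check
```

Restricting every reply to a closed option list is what lets the autonomous moderator parse free-form LLM output reliably round after round.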
2309.03852#0 | FLM-101B: An Open LLM and How to Train It with $100K Budget | arXiv:2309.03852v2 [cs.CL] 17 Sep 2023 # FLM-101B: An Open LLM and How to Train It with $100K Budget Xiang Li1†, Yiqun Yao1†, Xin Jiang1†, Xuezhi Fang1†, Xuying Meng2, Siqi Fan3, Peng Han3, Jing Li4, Li Du1, Bowen Qin1, Zheng Zhang1, Aixin Sun5, Yequan Wang1∗ 1Beijing Academy of Artificial Intelligence, Beijing, China 2Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 3University of Electronic Science and Technology of China, Chengdu, China 4Harbin Institute of Technology, Shenzhen, China 5School of Computer Science and Engineering, Nanyang Technological University, Singapore # Abstract Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational cost, and (ii) fair and objective evaluations. In this paper, we report a solution to significantly reduce LLM training cost through a growth strategy. We demonstrate that a 101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US dollars. Inspired by IQ tests, we also consolidate an additional range of evaluations on top of existing evaluations that focus on knowledge-oriented abilities. These IQ evaluations include symbolic mapping, rule understanding, pattern mining, and anti-interference. Such evaluations minimize the potential impact of memorization. Experimental results show that our model, named FLM-101B, trained with a budget of $100K, achieves performance comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B, especially on the additional range of IQ evaluations. The checkpoint of FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B. # Introduction | 2309.03852#1 | 2309.03852 | [
"2306.15595"
] |
2309.03852#1 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Large language models (LLMs) have demonstrated great successes in a wide range of tasks, particularly in language processing [65; 64; 11; 30] and multimodal tasks [82; 33]. Throughout their development, many model architectures have been proposed and evaluated, including decoder-only structures (e.g., the GPT series [40; 41; 3] and the LLAMA series [58; 59]), encoder-only structures (e.g., BERT [10]), and encoder-decoder structures (e.g., T5 [44]), along with their variants [29; 21; 55; 45]. Regardless of the differences in model architectures, all LLMs face the same challenge of high training cost. There is also a current trend suggesting using larger amounts of training data. For example, the LLAMA-1 [58] models use 1-1.4T tokens for training, while the LLAMA-2 [59] series uses 2T tokens. Hence, a primary emphasis in LLM research is to find effective solutions to reduce training costs. In this paper, we present our solutions to train an LLM at the 100B-parameter scale using a growth strategy inspired by our previous research [78]. 'Growth' means that the number of parameters is not fixed, but expands from small to large as the training progresses. Figure 1 illustrates three typical scenarios for growth strategies. As the FLOPs of LLMs are approximately proportional to their | 2309.03852#0 | 2309.03852#2 | 2309.03852 | [
"2306.15595"
] |
2309.03852#2 | FLM-101B: An Open LLM and How to Train It with $100K Budget | *Corresponding author. Email: [email protected] †Indicates equal contribution. Technical Report. 2023-09-15 (v2) [Figure 1: four panels plotting parameters (billion) against tokens (trillion): (a) Without growth; (b) Growth strategy 1: cost saving equal to 50%; (c) Growth strategy 2: cost saving less than 50%; (d) Growth strategy 3: cost saving greater than 50%.] Figure 1: An overview of different growth strategies. | 2309.03852#1 | 2309.03852#3 | 2309.03852 | [
"2306.15595"
] |
2309.03852#3 | FLM-101B: An Open LLM and How to Train It with $100K Budget | number of parameters [19], the area under the parameter curve represents the computational cost of training. Figure 1(a) serves as a reference for the cost with a constant number of parameters (y-axis) w.r.t. the number of tokens (x-axis). Figure 1(b) illustrates a straightforward linear growth strategy, leading to a cost-saving of exactly 50%; Figure 1(c) showcases a modest growth strategy that reduces the cost by less than 50%; in contrast, Figure 1(d) represents an aggressive growth strategy, which reduces the cost by more than 50%. This analysis informs our decision to employ the aggressive growth strategy for maximal computational savings. In our model training, we achieve aggressive growth with an enhanced growth strategy originated in our previous work MSG [78], a strategy that achieves strict function-preserving when growing. With a fixed $100K budget, we focus on 100B+ parameters. Although the Chinchilla laws [19] suggest that training a smaller model with more data may potentially result in higher scores on some benchmarks due to more sufficient training, we believe that verifying the feasibility of a growth strategy [15; 51; 6; 78] would be a new direction and beneficial to the community of LLM as well. This is because (i) larger models have higher upper bounds for capabilities that may not be reached by scaling only the training data [69], and (ii) data can be linearly scaled up with the budget, while a growth strategy has the potential for saving cost regardless of the amount of available data, if it turns out to be feasible. Existing studies such as [19] have not extensively investigated this area because they only consider the scenarios where model sizes are fixed throughout training. Another critical challenge in LLM research is evaluation. Existing mainstream evaluations can be broadly grouped into two categories: knowledge evaluation (e.g., MMLU [17] and C-Eval [20]), and NLP tasks evaluation. Such evaluations may not fully reflect the model capability due to potential data leakage if some of the evaluation datasets were also used in model training. In addition, it is also difficult to distinguish whether the models remember a piece of knowledge or possess the capacity for reasoning and/or inference. | 2309.03852#2 | 2309.03852#4 | 2309.03852 | [
"2306.15595"
] |
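Because training FLOPs scale roughly with the parameter count at every step, the relative cost of each schedule in Figure 1 can be estimated by integrating the parameter curve over the token budget. Below is a minimal numerical sketch; the linear and quadratic curves are illustrative stand-ins for Figure 1's schedules, not the paper's actual growth operators:

```python
import numpy as np

T = 1.0                          # token budget (trillions), x-axis of Figure 1
t = np.linspace(0.0, T, 10_001)
N_full = 100.0                   # final parameter count (billions)

no_growth  = np.full_like(t, N_full)      # Fig. 1(a): size fixed from the start
linear     = N_full * t / T               # Fig. 1(b): linear growth, 50% saving
aggressive = N_full * (t / T) ** 2        # Fig. 1(d)-style: stays small longer

baseline = np.trapz(no_growth, t)         # area under curve ~ training FLOPs
for name, curve in [("no growth", no_growth), ("linear", linear),
                    ("aggressive", aggressive)]:
    print(f"{name:10s} relative cost: {np.trapz(curve, t) / baseline:.2f}")
# -> no growth 1.00, linear 0.50, aggressive 0.33 (saving greater than 50%)
```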
2309.03852#4 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Borrowing some ideas from Intelligence Quotient (IQ) tests (e.g., Perceptual Reasoning and Working Memory [67]), we consolidate another range of evaluations on LLMs, including symbolic mapping, rule understanding, pattern mining, and anti-interference evaluations. Symbolic mapping [71] evaluation tests the capability of LLMs in learning to use (less meaningful) symbols instead of (more meaningful) category labels for some forms of classification tasks. Rule understanding evaluation tests the capability to understand given rules and then perform the corresponding actions. Pattern mining, which involves both induction and deduction, is often used in various levels of competition. It tests the pattern-finding capability (e.g., repetition of certain parts of a given input). Last but not least, anti-interference is the ability to recognize core | 2309.03852#3 | 2309.03852#5 | 2309.03852 | [
"2306.15595"
] |
2309.03852#5 | FLM-101B: An Open LLM and How to Train It with $100K Budget | information from noisy input [5; 84]. We believe the evaluations inspired by IQ tests are less likely to be affected by data leakage or memorization, hence providing another dimension for fair, objective, and reliable evaluations of LLMs. To summarize, the paper has made the following contributions. First, to the best of our knowledge, this is the first attempt to use a growth strategy to train an LLM with 100B+ parameters from scratch. Simultaneously, it is probably the lowest-cost model with 100B+ parameters, costing only 100,000 US dollars. Second, we address several instability issues via promising approaches for hyperparameter search, function-preserving growth, and improvements based on our FreeLM [25]. Our methodology holds potential benefits for the broader research community. Third, we conduct extensive evaluations, including both the commonly used knowledge-oriented benchmarks and the new range of evaluations inspired by IQ tests. Experimental results show that, despite its low training cost, FLM-101B is competitive and robust. Lastly, we release the model checkpoints, code, related tools, etc., to promote research on bilingual Chinese and English LLMs at the scale of 100B+. # 2 Design Overview of FLM-101B In this section, we provide an outline of FLM-101B, detailing its architecture, pre-training methods, and configuration specifics. # 2.1 Architecture The architecture of an LLM significantly impacts its capabilities. Current research [80; 3] underscores the high costs associated with experimenting on diverse architectures. Hence, it is more suitable to select an architecture with great potential for cost effectiveness and model capability. | 2309.03852#4 | 2309.03852#6 | 2309.03852 | [
"2306.15595"
] |
2309.03852#6 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Backbone. Among the many existing model architectures, we adopt FreeLM [25] as the backbone for our models, with modifications. FreeLM is based on GPT [41], a transformer-like architecture with a decoder-only configuration known for its exceptional performance. Different from GPT, FreeLM features two pre-training objectives: the language objective and the teacher objective (Section 2.2). We preserve the GPT-style transformer block designs, including the Pre-LayerNorm and the additional LayerNorm after the last transformer layer. We employ the tokenizer derived from GPT-4, characterized by a vocabulary size of 100,256. | 2309.03852#5 | 2309.03852#7 | 2309.03852 | [
"2306.15595"
] |
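To make the block design above concrete, here is a minimal PyTorch sketch of a GPT-style Pre-LayerNorm stack with the extra LayerNorm after the last block. All class names and the small dimensions are our own illustrative choices; this is not FLM's actual implementation, and the causal attention mask is omitted for brevity.

```python
import torch
import torch.nn as nn

class PreLNBlock(nn.Module):
    """GPT-style transformer block: LayerNorm is applied *before* each sublayer."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)   # Pre-LN before attention
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)   # Pre-LN before the MLP
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual around attention
        x = x + self.mlp(self.ln2(x))                      # residual around the MLP
        return x

class GPTStyleBackbone(nn.Module):
    def __init__(self, vocab=100_256, d_model=512, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.blocks = nn.ModuleList(PreLNBlock(d_model, n_heads) for _ in range(n_layers))
        self.final_ln = nn.LayerNorm(d_model)  # additional LayerNorm after the last block
        self.lm_head = nn.Linear(d_model, vocab, bias=False)

    def forward(self, ids):
        x = self.embed(ids)
        for blk in self.blocks:
            x = blk(x)
        return self.lm_head(self.final_ln(x))

logits = GPTStyleBackbone()(torch.randint(0, 100_256, (1, 16)))  # (1, 16, 100256)
```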
2309.03852#7 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Integration of xPos. To enhance long sequence modeling, we integrate the Extrapolatable Position Embedding (xPos) [56] in FLM-101B. This innovation draws inspiration from the principles of RoPE [54] and aims to further improve the length extrapolation ability: by introducing an exponential decay into the rotation matrix, xPos addresses the difficulty of extrapolating to sequences longer than those seen in training. To the best of our knowledge, FLM-101B is the largest model to date that incorporates the xPos technology. Model Sizes. Benefiting from the proposed growth strategy, the FLM series produces three models with 16B, 51B, and 101B (i.e., FLM-101B) parameters in a single training. The training process is carried out in a sequential manner, starting from a smaller model (i.e., 16B) and progressively growing to larger ones (i.e., 51B and 101B). # 2.2 Pre-Training Setup FLM-101B. By design, FLM-101B is an English-Chinese bilingual model pre-trained with causal language modeling. It mixes English and Chinese corpora at a ratio of approximately 53.5% : 46.5% for language modeling. Inspired by the finding that instruction data can augment LLMs' comprehension capabilities [37], we integrate multi-task instructionally prompted data: OIG (Open Instruction Generalist)1 and COIG (Chinese Open Instruction Generalist)2, in the pre-training stage. eFLM-16B. To evaluate the effect of using domain-specific knowledge data (Section 4.2), we apply the FreeLM teacher signals [25] to enhance FLM. Due to computational cost, we incorporate the teacher signals only in the smallest 16B model. This knowledge-enhanced FLM-16B is named eFLM-16B. 1https://huggingface.co/datasets/laion/OIG 2https://huggingface.co/datasets/BAAI/COIG | 2309.03852#6 | 2309.03852#8 | 2309.03852 | [
"2306.15595"
] |
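The xPos idea described above (RoPE-style rotation plus an exponential decay on the rotation) can be sketched compactly. In the following PyTorch snippet, the decay constant `gamma`, the scale base of 512, and all function names are our illustrative assumptions, not FLM-101B's actual configuration; it only demonstrates how queries and keys receive opposite decay exponents so that their dot product decays with relative distance.

```python
import torch

def xpos_scales(head_dim: int, gamma: float = 0.4) -> torch.Tensor:
    # Per-dimension decay base in (0, 1]; higher-frequency dims decay faster.
    i = torch.arange(0, head_dim, 2).float() / head_dim
    return (i + gamma) / (1.0 + gamma)

def apply_xpos(x: torch.Tensor, positions: torch.Tensor, sign: int) -> torch.Tensor:
    """x: (seq, head_dim); sign=+1 for queries, -1 for keys."""
    head_dim = x.shape[-1]
    freqs = 1.0 / (10000 ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = positions[:, None] * freqs[None, :]               # RoPE rotation angles
    scale = xpos_scales(head_dim)[None, :] ** (sign * positions[:, None] / 512.0)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    rot1 = (x1 * torch.cos(angles) - x2 * torch.sin(angles)) * scale
    rot2 = (x1 * torch.sin(angles) + x2 * torch.cos(angles)) * scale
    return torch.stack((rot1, rot2), dim=-1).flatten(-2)

q, k = torch.randn(16, 64), torch.randn(16, 64)
pos = torch.arange(16).float()
q_rot = apply_xpos(q, pos, sign=+1)  # queries scaled by decay^(+n)
k_rot = apply_xpos(k, pos, sign=-1)  # keys scaled by decay^(-n); q.k depends on n-m
```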
2309.03852#8 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Table 1: Partial configurations for different growth stages.
Params (billion) | Learning Rate | Warmup (samples) | Batch Tokens (million) | Time (day) | Tokens (billion)
16 | 4e-4 | 4,608,000 | 4.72 | 9.63 | 245.37
51 | 3.4e-4 | 230,400 | 4.72 | 5.37 | 39.64
101 | 2e-4 | 230,400 | 4.31 | 6.54 | 26.54 | 2309.03852#7 | 2309.03852#9 | 2309.03852 | [
"2306.15595"
] |
2309.03852#9 | FLM-101B: An Open LLM and How to Train It with $100K Budget | The original FreeLM incorporates two training objectives: the language modeling objective guided by language signals and the binary classification objective guided by teacher signals. In FLM-101B, we unify the two objectives by using a masking strategy and two specialized tokens. These tokens facilitate the transformation of the binary classification objective into the unified language modeling format. The unified training objective leads to training stability when the model becomes much larger in scale. Hence, for eFLM-16B, we transform this binary classification into the format of causal language modeling. Specifically, we employ two emojis from the vocabulary, 😈 (U+1F608) and 😡 (U+1F621)3, to replace the original binary labels of 1 and 0. We apply zero-masking to the loss for tokens in the propositions and predict one of these two special tokens at the end of each proposition. By this method, we unify the teacher objective and language modeling. Moreover, we discard the original Iterative Training approach [25] and completely mix the samples from both signals in every batch. This strategy can enhance the consistency of data sampling distribution as well as improve training stability. # 2.3 Growth Strategy The essence of the low cost in scaling FLM-101B up is the growth strategy in model training. Specifically, we train three models, with 16B, 51B, and 101B parameters respectively, in a sequential manner. Each model inherits knowledge from its predecessor. This is contrary to the common practice that the models of different sizes are trained independently [58; 59]. Function-preserving Growth. Function preservation means that before and after growth, the models yield consistent outputs given the same arbitrary inputs. This property has proven beneficial for both knowledge inheritance [8; 6; 51] and training stability [78]. The growth operators used in FLM-101B training originate from [78], with improvements. Specifically, to adapt these operators to the multi-node 3D parallel framework, we implement them by extending the model structures offline and reloading the checkpoint when the next stage starts. | 2309.03852#8 | 2309.03852#10 | 2309.03852 | [
"2306.15595"
] |
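The masking strategy described above can be illustrated with a short sketch. The token ids and helper names below are our own hypothetical constructions (the real model uses two emoji tokens from its vocabulary); the point is that proposition tokens are loss-masked and only the final label token contributes to the causal LM loss.

```python
import torch
import torch.nn.functional as F

IGNORE = -100                      # positions excluded from the LM loss
POS_TOKEN, NEG_TOKEN = 9001, 9002  # hypothetical ids standing in for the two label emojis

def build_teacher_example(prop_ids, label):
    """Append the label token; zero-mask the loss over the proposition itself."""
    input_ids = prop_ids + [POS_TOKEN if label == 1 else NEG_TOKEN]
    labels = [IGNORE] * len(prop_ids) + [input_ids[-1]]
    return torch.tensor(input_ids), torch.tensor(labels)

def lm_loss(logits, labels):
    # Standard next-token loss; IGNORE positions contribute nothing, so a
    # teacher sample trains only on predicting its final label token.
    return F.cross_entropy(logits[:-1], labels[1:], ignore_index=IGNORE)

ids, labels = build_teacher_example([5, 17, 42], label=1)
# ids    -> [5, 17, 42, 9001]
# labels -> [-100, -100, -100, 9001]
```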
2309.03852#10 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Schedules and Cost-Effectiveness. Model growth scheduling is a trade-off between the pros and cons inherent to models of different sizes [78]: a smaller model is faster in computing each training step, enabling more rapid consumption of training data for broader commonsense knowledge; conversely, a larger model is better in the reduction of loss per step, indicating a deeper understanding of the nuanced linguistic patterns. We train the 16B model with 245.37B tokens, the 51B model with 39.64B tokens, and the 101B model with 26.54B tokens. The tokens consumed per day at each model size can be derived from Table 1. Under this growth schedule, the total time cost for our 101B model is 21.54 days, which is a 72% time saving (or a 3.56x speedup) compared to training a 101B model from scratch (76.74 days). This is consistent with our motivations depicted in Figure 1. # 2.4 The Parallelism Setup and Model Configurations FLM-101B is trained on a cluster of 24 DGX-A800 GPU (8×80G) servers. Following the growth strategy, we sequentially complete the model training for sizes 16B, 51B, and 101B on this cluster. The Parallel Strategies. Data parallelism [60] and tensor model parallelism [52] have become the standard approaches for training models at the billion scale. Nevertheless, an excessive amount of tensor parallelism may escalate GPU communication overheads, hampering training efficiency. To tackle this problem, we integrate pipeline model parallelism [35] and employ a 3D parallel strategy for optimal throughput. Moreover, by employing sequence parallelism [24], we slice the inputs to the | 2309.03852#9 | 2309.03852#11 | 2309.03852 | [
"2306.15595"
] |
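The 72% saving and 3.56x speedup quoted above can be checked directly from the Table 1 numbers. The following is our back-of-the-envelope arithmetic, assuming the 101B-stage throughput as the from-scratch baseline:

```python
# (tokens in B, days) for the 16B, 51B, and 101B stages, from Table 1.
stages = [(245.37, 9.63), (39.64, 5.37), (26.54, 6.54)]

total_tokens = sum(t for t, _ in stages)          # ~311.55B tokens
staged_days = sum(d for _, d in stages)           # 21.54 days
tokens_per_day_101b = 26.54 / 6.54                # 101B-stage throughput
scratch_days = total_tokens / tokens_per_day_101b # ~76.8 days from scratch

print(f"speedup {scratch_days / staged_days:.2f}x, "
      f"saving {100 * (1 - staged_days / scratch_days):.0f}%")
# -> roughly 3.56x and 72%, matching the reported figures.
```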
2309.03852#11 | FLM-101B: An Open LLM and How to Train It with $100K Budget | # 3https://apps.timwhitlock.info/emoji/tables/unicode
Table 2: Parallel strategies and throughput for different growth stages. For NVIDIA A800 GPUs, the peak theoretical FLOPs per second is 312 teraFLOPs/sec. Gradient accumulation is applied for the large global batch size.
Params (billion) | Tensor Parallel Size | Pipeline Parallel Size | Data Parallel Size | Number of GPUs | Batch Size | teraFLOP/s per GPU | FLOPs Utilization
16 | 2 | 1 | 96 | 192 | 2304 | 162 | 51.90%
51 | 4 | 2 | 24 | 192 | 2304 | 160 | 51.30%
101 | 4 | 4 | 12 | 192 | 2160 | 165 | 52.88% | 2309.03852#10 | 2309.03852#12 | 2309.03852 | [
"2306.15595"
] |
2309.03852#12 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Transformer core's LayerNorm and Dropout layers along the sequence length dimension, leading to additional savings in GPU computational resources and memory utilization. We also utilize the Megatron-LM4 implementation of the distributed optimizer [46] to further reduce the GPU memory consumption, a technique that evenly distributes the optimizer states across data parallel ranks. Table 2 shows the parallelism configurations and training throughput in each stage of FLM-101B training under our growth strategy. In different stages, we configure different Tensor Parallel × Pipeline Parallel sizes to achieve higher throughput. The single-GPU throughput for all three training stages consistently exceeds 160 teraFLOPs/sec with a utilization rate of at least 51.3%. For comparison, GLM-130B achieves 135 teraFLOPs/sec [80] with a 42.27% utilization rate. We can also find that FLM-101B has a higher FLOP utilization rate than Megatron-LM [24] under a similar model size. FLM-101B Configurations. The FLM-101B model is structured with a hidden state dimension of 10,240, a layer number of 80, a context window of 2,048 tokens, 80 attention heads, and a vocabulary size of 100,256. FLM-101B uses the AdamW optimizer [31] with β1 = 0.9 and β2 = 0.95. A cosine learning rate schedule is employed, leading to a final learning rate of 6e-6. We use a weight decay of 0.1 and gradient clipping of 1.0. Table 1 presents part of the hyperparameters used in different growth stages. In each growth stage, we approximately inherit the previous learning rate and adhere to the same schedule. The learning rate at the beginning of each stage is reported in the table. In the 16B stage, 4,608k samples are used for learning rate warmup, while in later growth stages, we use fewer (230.4k) samples. Note that we do not apply batch size warmup because we address the stability issue in a different manner, detailed in Section 3. The training duration and token consumption for each stage are also outlined in Table 1. In total, FLM-101B training is accomplished within 22 days using 311.54B tokens. | 2309.03852#11 | 2309.03852#13 | 2309.03852 | [
"2306.15595"
] |
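The FLOPs-utilization column of Table 2 follows directly from the per-GPU throughput and the A800 peak given in the caption. A quick check (our arithmetic):

```python
# Utilization = achieved teraFLOP/s per GPU / 312 (A800 peak, per Table 2's caption).
A800_PEAK = 312.0
for stage, tflops in [("16B", 162), ("51B", 160), ("101B", 165)]:
    print(f"{stage}: {100 * tflops / A800_PEAK:.2f}% utilization")
# -> 51.92%, 51.28%, 52.88%, matching the reported 51.90% / 51.30% / 52.88%.
```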
2309.03852#13 | FLM-101B: An Open LLM and How to Train It with $100K Budget | # 3 Training Stability of FLM-101B Models beyond 100B parameters [49; 80] usually suffer from a host of notorious stability issues including loss divergence, gradient explosion, and numerical overflow/underflow. This not only inflates the cost of searching for feasible hyperparameters like optimal learning rates, but also intensifies ongoing maintenance during training, such as babysitting, issue resolution, data adjustment, and rebooting. Moreover, this makes the budget of the whole project unpredictable. We have undertaken the following efforts to mitigate these issues. Loss Prediction. The Tensor Programs theories [75; 28] unveil the universal relations across the training dynamics of a series of models with the model width tending to infinity. For certain classes of hyperparameters, this results in a parameterized mapping for their optimal value between a small model and its larger counterparts, which is termed µP [76]. Two important insights are: | 2309.03852#12 | 2309.03852#14 | 2309.03852 | [
"2306.15595"
] |
2309.03852#14 | FLM-101B: An Open LLM and How to Train It with $100K Budget | • The wider, the better: theoretically, under µP transfer, a wider model will always yield lower loss than its narrower counterparts when exposed to identical data [76]. As a direct corollary, if a narrow model converges, its wider counterparts will always converge.
• Loss prediction: the loss value of a large model is predictable using the loss of its smaller counterparts, as claimed in the GPT-4 technical report [36]. For the first time in the open-source world, µScaling [77] provides evidence that loss prediction can be achieved by combining µP [76] and (a modified) scaling law [23; 18; 19].
# 4https://github.com/NVIDIA/Megatron-LM
[Figure 2 plot: training loss (y-axis) vs. processed tokens in billions (x-axis), spanning the 16B, 51B, and 101B stages.] Figure 2: Training loss for FLM-101B models.
Based on these findings, our method to solve training stability is as follows: we first determine the data distribution before the FLM-16B training starts. Next, we perform a grid search on three hyperparameters including the learning rate, initialization standard deviation, and the softmax temperature in the output layer. This grid search is performed by running a proxy model (less than 100M parameters) with a hidden state dimension ("model width") of 256 and a head number of 2. All the other structural hyperparameters and training data of the proxy model are identical to those of FLM-16B. A single run of grid search takes 24.6 hours with data parallelism on 6 nodes, which is equivalent to 6 hours per run given our 24-node infrastructure. Finally, we find a group of well-performing hyperparameters through this grid search: learning rate = 4e-4, standard deviation = 1.6e-2, and softmax temperature = 2.0. Transferring these hyperparameters to the 16B model via µP [76] led to a seamless training experience devoid of instabilities. Combining with MSG [78], we also witness no post-growth divergence in FLM-51B and FLM-101B. The full training loss curve is presented in Figure 2. | 2309.03852#13 | 2309.03852#15 | 2309.03852 | [
"2306.15595"
] |
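The proxy-model grid search described above has a simple shape. The sketch below is schematic and ours alone: `train_proxy` is a stand-in for actually training the width-256 proxy on the same data (about 6 hours per run on the 24-node cluster) and returning its loss; the candidate grids are hypothetical.

```python
import itertools

def train_proxy(lr, init_std, softmax_temp):
    # Stand-in for a real proxy training run; here it is a toy function whose
    # minimum sits at the paper's selected point, just so the example runs.
    return abs(lr - 4e-4) + abs(init_std - 1.6e-2) + abs(softmax_temp - 2.0)

search_space = {
    "lr":           [1e-4, 2e-4, 4e-4, 8e-4],
    "init_std":     [8e-3, 1.6e-2, 3.2e-2],
    "softmax_temp": [1.0, 2.0, 4.0],
}
best = min(
    (dict(zip(search_space, cfg)) for cfg in itertools.product(*search_space.values())),
    key=lambda cfg: train_proxy(**cfg),
)
print(best)  # {'lr': 0.0004, 'init_std': 0.016, 'softmax_temp': 2.0}
# Under muP, the winning configuration transfers to the 16B model without re-tuning.
```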
2309.03852#15 | FLM-101B: An Open LLM and How to Train It with $100K Budget | The first stage (16B) stably goes through 246B tokens. Immediately afterwards, FLM grows from 16B to 51B. As expected, the training is stable. More importantly, we observe that the loss curve becomes steeper. It matches the intuition that a larger model is better in loss reduction per step. Subsequently, FLM grows to 101B. Although the training data for the 51B stage are only 40B tokens, the 101B training remains stable, and the loss curve becomes slightly steeper again. This loss curve proves the effectiveness of the growth strategy. Our implementations of µP are largely consistent with those in µScaling [77], with modifications to handle the rotary embedding. Thus, the intermediate loss ranges for FLM-16B are also predictable with the results from multiple proxy widths at the same steps. Mixed Precision with Bfloat16. We apply mixed-precision training to save run-time memory and reduce time costs. Specifically, we choose Bfloat16 instead of FP16 due to its superior precision for values approaching zero, making it more suitable for µP. As a result, we do not encounter the FP16 underflow issue reported by [76]. To our knowledge, the FLM models are currently the largest ones successfully trained with mixed precision + µP. Moreover, Bfloat16 negates the need for loss scale adjustments, making our training procedure more promising and reproducible. # 4 Benchmark Evaluation Many existing benchmarks (e.g., Open LLM) focus on assessing the knowledgeability of LLMs. In this section, we discuss the results of FLM on these benchmarks. We argue that knowledge alone might not comprehensively reflect LLMs' capability (see Section 4.2 for more details). Thus, in addition to the common benchmark evaluation, we borrow the concept of IQ tests and evaluate LLMs with some specific tasks in Section 5. Cost Estimation Method. Due to the considerable computational expense of LLMs, we also emphasize their associated costs in our experimental results. However, it is hard to directly compare | 2309.03852#14 | 2309.03852#16 | 2309.03852 | [
"2306.15595"
] |
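The bfloat16 property the text relies on, no loss-scaling machinery as with fp16, is easy to see in generic PyTorch usage. This snippet is standard autocast usage (assuming a CUDA device), not FLM's training code:

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 1024, device="cuda")

# With bf16 autocast there is no GradScaler and no loss-scale adjustment,
# unlike the typical fp16 recipe.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(x).pow(2).mean()
loss.backward()
opt.step()
```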
2309.03852#16 | FLM-101B: An Open LLM and How to Train It with $100K Budget | the actual cost of LLMs due to their different infrastructures, and the different costs incurred on different hardware. To objectively compare training costs, we use the number of floating-point operations for training as the cost estimation index, which can be estimated from the model's hyperparameters, configuration, and training data [35]. Since many models do not release the complete training configuration (e.g., GPT-3, LLAMA series), we estimate FLOPs within a range5. For monolingual LLMs, e.g., GPT-3, the cost from monolingual data is equal to the total cost. The computational cost of GPT-3 is calculated as 376.41 (±53.77) zettaFLOPs, and LLAMA-2 (13B) as 210.37 (±28.77) zettaFLOPs. Because the cost is linear in both model parameters and training data [19], the cost of the remaining LLAMA models can be calculated easily. For bilingual or multilingual models, it is necessary to estimate based on the amount of data in the corresponding language. The total cost of GLM-130B is 421.60 zettaFLOPs. We know that the data ratio of English and Chinese is 1:1. Hence, the cost of GLM-130B for English is 210.80 zettaFLOPs, and the same for Chinese. The data ratio of FLM-101B is 53.5% : 46.5% for English and Chinese. The total cost of FLM-101B is 52.76 zettaFLOPs. According to the data ratio, the cost for English and Chinese is 28.22 zettaFLOPs and 24.54 zettaFLOPs, respectively. # 4.1 Open LLM Evaluation Open LLM is an open-source project6. Its goal is to track and evaluate open-sourced LLMs and chatbots. Open LLM contains four tasks: ARC-Challenge (ARC for short), HellaSwag, MMLU, and TruthfulQA. The Open LLM Leaderboard applies the average score of these tasks as a metric. ARC: | 2309.03852#15 | 2309.03852#17 | 2309.03852 | [
"2306.15595"
] |
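A rough re-derivation of the FLM-101B figure is possible with the common C ≈ 6·N·D approximation. This is our own arithmetic, not the paper's exact method (the paper follows [35] and reports a range to account for checkpoint activation), but it lands close to the reported total:

```python
def zetta_flops(params_B: float, tokens_B: float) -> float:
    # Common approximation: training cost ~ 6 * parameters * tokens.
    return 6 * (params_B * 1e9) * (tokens_B * 1e9) / 1e21

flm_stages = [(16, 245.37), (51, 39.64), (101, 26.54)]  # (params B, tokens B)
flm_total = sum(zetta_flops(n, d) for n, d in flm_stages)
print(f"FLM-101B ~ {flm_total:.2f} zettaFLOPs")          # ~51.8 vs. reported 52.76
print(f"English ~ {0.535 * flm_total:.2f}, Chinese ~ {0.465 * flm_total:.2f}")
```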
2309.03852#17 | FLM-101B: An Open LLM and How to Train It with $100K Budget | The ARC [9] dataset is proposed for grade-school level closed-book science question-answering tasks. Most problems in ARC are solvable with life experiences and Wikipedia searches. Thus, a model is expected to perform better if exposed to more commonsense and factual data. HellaSwag: This is a sentence completion task emphasizing commonsense inference [79]. We observe that the increase in HellaSwag performance is highly correlated with the reduction of training loss. This is intuitive because the training data is usually enriched with common sense. | 2309.03852#16 | 2309.03852#18 | 2309.03852 | [
"2306.15595"
] |
2309.03852#18 | FLM-101B: An Open LLM and How to Train It with $100K Budget | MMLU: MMLU includes 57 multiple-choice tasks covering subjects spanning STEM to social science [17]. The tasks differ significantly in complexity, with many STEM-oriented questions demanding domain-specific professional knowledge and intricate reasoning to be solved. TruthfulQA: TruthfulQA contains 817 factual questions to detect model falsehoods caused by naively mimicking human language patterns [27]. The solutions to these questions are closely associated with English Wikipedia sources. The task probes a model's factual knowledge and resistance to popular misconceptions.
Table 3: Performance of FLM-101B and baselines including the LLAMA series and GLM-130B. In order to visually compare the performance and cost, we estimate the floating-point operations (zetta = 10^21) of the training process.
Model | Cost (zettaFLOPs) | Average | ARC | HellaSwag | MMLU | TruthfulQA
LLAMA-2 (13B) | 201.37 (±28.77) | 58.66 | 59.39 | 82.13 | 55.77 | 37.38
LLAMA-2 (7B) | 106.60 (±15.23) | 54.32 | 53.07 | 78.59 | 46.87 | 38.76
LLAMA (13B) | 94.81 (±13.54) | 56.08 | 56.23 | 80.93 | 47.67 | 39.48
LLAMA (7B) | 49.54 (±7.08) | 49.72 | 51.02 | 77.82 | 35.71 | 34.33
GLM-130B | 210.80 | 48.11 | 42.15 | 67.91 | 42.59 | 39.80
FLM-101B | 28.22 | 43.94 | 39.76 | 66.23 | 28.30† | 41.47
† 44.50 for a knowledge-enhanced eFLM-16B (Section 2.2, 4.2). | 2309.03852#17 | 2309.03852#19 | 2309.03852 | [
"2306.15595"
] |
2309.03852#19 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Table 3 details the performance of FLM-101B and strong baselines, including the LLAMA series and GLM-130B. Because GPT-3 is closed-source, we could not get the probability values for a fair comparison. As a result, we cannot list GPT-3 here. GLM-130B results are achieved by our run on an open-sourced checkpoint.
5This range originates from the use of checkpoint activation. Please check [35] for more details.
6https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard | 2309.03852#18 | 2309.03852#20 | 2309.03852 | [
"2306.15595"
] |
2309.03852#20 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Results. Among all the baseline models, FLM-101B ranks last with an average of 43.94. However, going deeper into the nature of these tasks, this does not necessarily indicate the inferiority of our model and training procedures. (i) MMLU typically requires domain knowledge to solve. In our training of FLM-101B, no English textbook or sample exam questions are intentionally used. Nevertheless, in an FLM variant that incorporates this knowledge with FreeLM objectives (eFLM-16B, Section 2.2), even a 16B FLM model can outperform GLM-130B, supporting our claims here. (ii) As aforementioned, TruthfulQA, ARC, and HellaSwag place more emphasis on common sense and Wiki-level knowledge, and their performances improve with the increased amount of data and the reduction of training loss. With less than 0.16T English data (about one-tenth of LLAMA-2), FLM-101B already achieves the best accuracy of 41.47 among all the baselines on TruthfulQA. On ARC and HellaSwag, FLM-101B is comparable to GLM-130B with a similar amount of English data (approximately 0.2T). Also, the training data of GLM-130B includes ARC and HellaSwag, as expressly claimed in [80]. In our understanding, superior performance of FLM-101B can be expected on these three tasks if it is exposed to more training data. # 4.2 Evaluation on the Professional Knowledge-Enhanced Version We have also conducted experiments on a knowledge-enhanced version (eFLM-16B, detailed in Section 2.2) of the FLM to validate the effect of using domain-specific knowledge data. | 2309.03852#19 | 2309.03852#21 | 2309.03852 | [
"2306.15595"
] |
2309.03852#21 | FLM-101B: An Open LLM and How to Train It with $100K Budget | To reduce the training cost, we continue to train the smallest FLM-16B with teacher signals from a combination of (i) part of the auxiliary training data of MMLU [17], (ii) exam questions in similar domains and formats to C-Eval [20]7, and (iii) other domain knowledge data. Note that eFLM-16B is not a typical fine-tuning with additional data, which may affect the language capability of an LLM. Recall that the FLM series uses FreeLM as its backbone, which can learn both language and teacher signals. In this training, we preserve the language signal. Table 4 lists the results of eFLM-16B and baselines on C-Eval.
Table 4: Performance of eFLM-16B and baselines on C-Eval. In this table, eFLM-16B refers to the professional-knowledge-enhanced FLM-16B. Note that the C-Eval leaderboard only keeps one decimal place for the evaluation results.
Model | Average | Average (Hard) | STEM | Social Science | Humanities | Others
GPT-4 | 68.7 | 54.9 | 67.1 | 77.6 | 64.5 | 67.8
ChatGPT | 54.4 | 41.4 | 52.9 | 61.8 | 50.9 | 53.6
GLM-130B | 44.0 | 30.7 | 36.7 | 55.8 | 47.7 | 43.0
eFLM-16B | 46.1 | 28.9 | 38.3 | 53.7 | 46.8 | 52.6
Results. Enhanced with professional knowledge, significant improvements are observed. On the MMLU task, the incorporation of the teacher signals with professional knowledge data results in a score of 44.50 for eFLM-16B (see Table 3), which surpasses GLM-130B (42.59), a model that also uses multi-task data in the related domain [80]. As a comparison, the MMLU score is 27.02 for the unenhanced FLM-16B. On C-Eval tasks8, we observe that eFLM-16B performs better than GLM-130B by about 2 points. | 2309.03852#20 | 2309.03852#22 | 2309.03852 | [
"2306.15595"
] |
2309.03852#22 | FLM-101B: An Open LLM and How to Train It with $100K Budget | As a comparison, the average C-Eval score of the vanilla FLM-16B is 27.0, which underperforms GLM-130B. These results suggest that evaluation with professional knowledge may not fully reflect the capability of LLMs, particularly when different LLMs are trained with different data collections, and some may not come with a clear list. # 4.3 Evaluation of the Growth Strategy Our core method for reducing computational cost is the growth strategy. We would like to answer the question of whether our growth strategy is effective in knowledge inheritance, and the trajectory of how model capabilities grow with size. Hence, we evaluate the performance of FLM on all the stages: 16B, 51B, and 101B.
7C-Eval can be considered as a Chinese version of MMLU.
8The scores are achieved on the test set by submitting to the C-Eval platform.
The training data for each stage is 0.245T, 0.04T, and 0.027T, respectively, in | 2309.03852#21 | 2309.03852#23 | 2309.03852 | [
"2306.15595"
] |
2309.03852#23 | FLM-101B: An Open LLM and How to Train It with $100K Budget | an accumulative manner according to the growth setting. Table 5 shows the performance of FLM models at each stage.
Table 5: Performance of the three stages of FLM on Open LLM. To reduce the computational cost during evaluation, we sample 20% and 30% of the items for the HellaSwag and MMLU tasks, respectively.
Parameters | Training Data | Average | ARC | HellaSwag | MMLU | TruthfulQA
16B | 245.37B | 39.19 | 32.25 | 58.57 | 27.02 | 38.92
51B | 39.64B | 41.79 | 35.32 | 64.04 | 27.66 | 40.12
101B | 26.54B | 44.41 | 39.76 | 67.88 | 28.54 | 41.47
Results. As expected, the performance of FLM improves with the increase in model size. FLM-101B achieves the best performance on almost all tasks. This means that our model inherits knowledge from the previous stage after each growth. We also observe that the 101B model improves the performance scores more significantly than the 51B model, with less data. This indicates that the models are successfully incorporating new weights in training after growth, and taking advantage of larger model sizes when the loss is low. Interestingly, the performance on ARC and HellaSwag increases steadily and significantly. This corresponds exactly to the steady decline of the model loss. Again, as we claimed in Section 4.1, when more training data is processed, FLM's performance on Open LLM becomes better. The above experiments evaluate the knowledge-related ability of FLM and how the performances depend on the amount and domain of training data. We also conduct an additional range of evaluations inspired by IQ tests in the following section. # 5 Evaluations Inspired by IQ Tests Section 4 details the evaluation of existing benchmarks, focusing on knowledge. As we discussed in Section 1, knowledge alone cannot fully reflect the Intelligence Quotient (IQ) of LLMs. To this end, we use existing IQ-related datasets [71; 72; 53] and make necessary modifications or generate new synthetic datasets where necessary. Specifically, the IQ test mainly considers four aspects: symbolic mapping, rule understanding, pattern mining, and anti-interference. | 2309.03852#22 | 2309.03852#24 | 2309.03852 | [
"2306.15595"
] |
2309.03852#24 | FLM-101B: An Open LLM and How to Train It with $100K Budget | A common key property of these tasks is that they depend on inference and generalization in a new context, instead of previously learned knowledge. We re-organize the modified existing datasets and our newly generated datasets under these four aspects, and introduce the motivation for each aspect, as well as the detailed execution methods. Compared Methods. Borrowing psychological ideas that the measurement of IQ is dependent on age9, we mainly consider models trained with similar amounts of data to FLM-101B. As a milestone of LLM development, GPT-3 (175B) [3] proposed in-context learning for the first time. GLM-130B [80] is the first open English-Chinese bilingual LLM. Hence, we select them as baseline models. Both models are trained with 300-400 billion tokens, which are in the same range as ours. GPT-3 focuses on English, so it is not included in the Chinese-related evaluation (i.e., CLUE-IQ). # 5.1 Symbolic Mapping Evaluation An existing study [71] points out that classification tasks (e.g., document classification, sentiment classification) in textual forms often lack generalization. This is because they often come with very indicative and meaningful category labels. Such labels may literally appear in the raw training data or popular websites, e.g., SemEval, IMDB [32], and Yelp10. This leads a model to over-fit the semantics of the labels instead of inferring them from the new context, while the latter is critical for measuring intelligence as well. Considering this, we use a symbolic mapping method to replace the original category labels with symbols that are unlikely to be seen in the training data. Hence, we can evaluate the LLMs' language understanding ability as well as the generalization abilities to a
9https://ocw.mit.edu/ans7870/9/9.00SC/MIT9_00SCF11_text.pdf, page 367.
10https://www.yelp.com/dataset/documentation/main | 2309.03852#23 | 2309.03852#25 | 2309.03852 | [
"2306.15595"
] |
2309.03852#25 | FLM-101B: An Open LLM and How to Train It with $100K Budget | new context. Because the labels are from a given scope, we form our evaluation task as in-context learning with few-shot examples for each label.
Symbolic Mapping Method
Instruction: Given the premise and hypothesis, determine the relationship between the two sentences.
Examples:
Premise: Kozlowski and the company's former chief financial officer, Mark Swartz, were sentenced, on Monday, to up to 25 years in prison.
Hypothesis: Kozlowski was sentenced, Monday, to serve up to 25 years in prison.
Answer: <30mFC%4Z>
... | 2309.03852#24 | 2309.03852#26 | 2309.03852 | [
"2306.15595"
] |
2309.03852#26 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Premise: Note that SBB, CFF and FFS stand out for the main railway company, in German, French and Italian.
Hypothesis: The French railway company is called SNCF.
Answer: <?V9qP@Rx>
Prompt:
Premise: Pibul Songgram was the pro-Japanese military dictator of Thailand during World War 2.
Hypothesis: Pibul was the dictator of Thailand.
Answer:
Traditional Direct Method
Instruction: Given the premise and hypothesis, determine the relationship between the two sentences.
Examples:
Premise: | 2309.03852#25 | 2309.03852#27 | 2309.03852 | [
"2306.15595"
] |
2309.03852#27 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Kozlowski and the company's former chief financial officer, Mark Swartz, were sentenced, on Monday, to up to 25 years in prison.
Hypothesis: Kozlowski was sentenced, Monday, to serve up to 25 years in prison.
Answer: entailment
...
Premise: Note that SBB, CFF and FFS stand out for the main railway company, in German, French and Italian.
Hypothesis: The French railway company is called SNCF. | 2309.03852#26 | 2309.03852#28 | 2309.03852 | [
"2306.15595"
] |
2309.03852#28 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Answer: not entailment
Prompt:
Premise: Pibul Songgram was the pro-Japanese military dictator of Thailand during World War 2.
Hypothesis: Pibul was the dictator of Thailand.
Answer:
Figure 3: An example of symbolic mapping. The main difference is that the symbolic mapping method replaces the original label with random strings. In this example, we use <30mFC%4Z> and <?V9qP@Rx> to replace entailment and not entailment, respectively.
# 5.1.1 Data Collection
We use the existing benchmark datasets (e.g., SuperGLUE [61], CLUE [74]) as the source and sample up to 300 instances. Then, we replace the original category labels with random strings. Figure 3 shows an example. In this case, the entailment category is replaced by the random string <30mFC%4Z> while the not entailment category is replaced by <?V9qP@Rx>. This processing also mitigates the problem that these datasets may contaminate the LLM pre-training data, since both benchmarks are public with lots of reproductions. Table 6 presents the statistics and task types of the rebuilt datasets.
Table 6: Statistics for SuperGLUE-IQ and CLUE-IQ datasets. "WSD" stands for "Word Sense Disambiguation"; "SS" stands for "Sentence Similarity"; "KR" stands for "Keyword Recognition"; "coref." stands for "coreference resolution".
Dataset | BoolQ | WiC | RTE | WSC | AFQMC | CSL | OCNLI | CLUEWSC2020
Source | SuperGLUE | SuperGLUE | SuperGLUE | SuperGLUE | CLUE | CLUE | CLUE | CLUE
Samples | 299 | 277 | 300 | 103 | 300 | 208 | 300 | 300
Task | QA | WSD | NLI | coref. | SS | KR | NLI | coref. | 2309.03852#27 | 2309.03852#29 | 2309.03852 | [
"2306.15595"
] |
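The label-replacement step described above is mechanically simple. Below is a small sketch of our own making (function names and the symbol alphabet are illustrative), showing how category names are swapped for random strings that are unlikely to occur in pre-training data:

```python
import random
import string

def random_label(length: int = 8) -> str:
    alphabet = string.ascii_letters + string.digits + "%?@$#"
    return "<" + "".join(random.choices(alphabet, k=length)) + ">"

def symbolize(instances, label_set):
    # One random symbol per original category, e.g. entailment -> <30mFC%4Z>.
    mapping = {lab: random_label() for lab in label_set}
    rebuilt = [{**ex, "label": mapping[ex["label"]]} for ex in instances]
    return rebuilt, mapping

data = [
    {"premise": "p1", "hypothesis": "h1", "label": "entailment"},
    {"premise": "p2", "hypothesis": "h2", "label": "not_entailment"},
]
rebuilt, mapping = symbolize(data, {"entailment", "not_entailment"})
print(mapping)  # the few-shot prompt then uses these symbols as answers
```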
2309.03852#29 | FLM-101B: An Open LLM and How to Train It with $100K Budget | # 5.1.2 SuperGLUE-IQ SuperGLUE is a benchmark dataset used in evaluating the classification ability of various models including LLMs. However, the data is publicly available and many websites have reproduced this dataset. As a result, it is inevitable that the models might have already been trained on it. Thus, we build a new dataset named SuperGLUE-IQ based on the original dataset. Since the answers for the test set of SuperGLUE are not publicly available, we use the validation set here. There are two rules for selecting the sub-tasks: (i) the number of instances exceeds 100; (ii) the classification categories are fixed sets. The building process is detailed in Section 5.1.1. Table 7 lists the performance of FLM-101B and the baselines. Results. On BoolQ, WiC, and RTE tasks, FLM-101B and GPT-3 perform at the same level, and both outperform GLM-130B. Specifically, GPT-3 and FLM-101B are more than 9 points better than GLM- | 2309.03852#28 | 2309.03852#30 | 2309.03852 | [
"2306.15595"
] |
2309.03852#30 | FLM-101B: An Open LLM and How to Train It with $100K Budget | 130B on BoolQ. On the WSC task, FLM-101B and GPT-3 perform comparably while both perform worse than GLM-130B, with about an 18-point gap. The technical report of GLM-130B [80] shows that they use both the WSC and RTE datasets in training. It is interesting to observe that the performance of GLM-130B on the two tasks has such a difference. Since the original label is replaced by a random string, overfitting can be ruled out to a certain extent. We believe that the main reason lies in the structure of language models: GLM-130B contains a bidirectional encoder while FLM-101B and GPT-3 are uni-directional. This feature potentially makes GLM-130B perform better in English coreference resolution tasks, while poorer in reasoning-related tasks (e.g., BoolQ). More importantly, the costs of the three models are very different. FLM-101B achieves a comparable performance with GPT-3 under about 1/13 of its computational cost.
Table 7: Performance on SuperGLUE-IQ of GPT-3, GLM-130B, and FLM-101B. The result of GPT-3 is evaluated by API. GLM-130B is evaluated with its open-sourced checkpoint.
Model | Cost (zettaFLOPs) | Average | BoolQ | WiC | RTE | WSC
GPT-3 | 376.41 (±53.77) | 47.60 | 50.84 | 53.33 | 48.38 | 37.86
GLM-130B | 210.80 | 48.19 | 40.13 | 48.67 | 47.65 | 56.31
FLM-101B | 28.22 | 46.76 | 49.50 | 50.33 | 48.38 | 38.83 | 2309.03852#29 | 2309.03852#31 | 2309.03852 | [
"2306.15595"
] |
2309.03852#31 | FLM-101B: An Open LLM and How to Train It with $100K Budget | # 5.1.3 CLUE-IQ CLUE [74] is an open benchmark for Chinese NLP tasks. Similar to SuperGLUE-IQ, we build CLUE-IQ based on the CLUE dataset. Because GPT-3 is unable to handle Chinese well, here we compare FLM-101B with GLM-130B only. There are four tasks to be evaluated, including AFQMC, CSL, OCNLI, and CLUEWSC2020.11 Similar to SuperGLUE-IQ, we follow the same two rules to filter the original CLUE. Table 8 lists the performance of FLM-101B and GLM-130B.
Table 8: Performance on CLUE-IQ for GLM-130B and FLM-101B.
Model | Cost (zettaFLOPs) | Average | AFQMC | CSL | OCNLI | CLUEWSC2020
GLM-130B | 210.80 | 39.96 | 33.33 | 53.85 | 34.0 | 38.67
FLM-101B | 24.54 | 42.07 | 38.33 | 55.29 | 27.33 | 47.33
Results. On CLUE-IQ, our proposed FLM-101B achieves the best average performance of 42.07. Among the evaluated tasks, FLM-101B outperforms GLM-130B on AFQMC, CSL, and CLUEWSC2020. The results show that FLM-101B has good Chinese ability at the level of 100B parameters. Interestingly, FLM-101B performs better than GLM-130B on Chinese WSC, while worse than GLM-130B on English WSC. In addition, FLM-101B performs worse than GLM-130B on OCNLI. These results suggest that Chinese and English are different in nature and a model excelling in one language may not be good at both. Finally, from a cost-effective perspective, FLM-101B achieves better performance in Chinese at about 12% of the training cost of the counterpart. # 5.2 Rule Understanding Evaluation Symbolic mapping is able to lighten the negative effects of data overfitting. From a different perspective, we consider that understanding given rules and executing them accordingly is a strong indication of reasoning capability. To this end, we design the rule understanding evaluation. | 2309.03852#30 | 2309.03852#32 | 2309.03852 | [
"2306.15595"
] |
2309.03852#32 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Note that this test is different from reasoning based on the chain of thought. The former focuses on the ability to understand simple rules (e.g., counting) and perform the right action in a closed setting, while the latter focuses on reasoning ability in an open setting (e.g., different valid reasons for the same conclusion). For example, "counting an increasing sequence of numbers" is a typical task for rule understanding evaluation, which can be zero-shot. Details of Selected Tasks and Data. Counting (0-shot) is the simplest test method for rule understanding ability. Here, we build a bilingual dataset with 300 randomly generated items and report the results on 148 of them with English instructions. A typical example is "Let's count from 10010 to 10035: 10010, 10011, 10012,".
11For the details of these tasks, please refer to the original work [74]. | 2309.03852#31 | 2309.03852#33 | 2309.03852 | [
"2306.15595"
] |
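Instances for these rule-understanding tasks are generated programmatically. The following generators are our illustrative reconstruction of the descriptions above (ranges, marker shapes, and function names are assumptions, not the exact released data):

```python
import random
import string

def counting_instance() -> str:
    start = random.randint(1, 20000)
    end = start + random.randint(10, 40)       # sampled uniformly, per the text
    return f"Let's count from {start} to {end}: {start}, {start + 1}, {start + 2},"

def replace_word_instance(sentence: str) -> str:
    src = "".join(random.choices(string.ascii_uppercase, k=4))  # e.g. WQHF
    tgt = "".join(random.choices(string.ascii_uppercase, k=4))  # e.g. DFBB
    word = random.choice(sentence.split())
    return (f"In the following sentence, replace the specified word with the "
            f"target word. word to replace: **{src}** target word: **{tgt}**\n"
            f"{sentence.replace(word, src)}")

def replace_lowercase_instance(text: str) -> str:
    return ("For the following text, please modify all uppercase letters to "
            f"lowercase\n{text}")

print(counting_instance())
print(replace_word_instance("The cat sat on the mat"))
```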
2309.03852#33 | FLM-101B: An Open LLM and How to Train It with $100K Budget | String replacement (4-shots) is another task that examines the model's capacity to edit text precisely following human intention. We build two sub-tasks: Replace-Word and Replace-Lowercase, each of which contains 300 instances. Each instance starts with a clear instruction: for the "Replace-Word" task, it is like "In the following sentence, replace the specified word with the target word. word to replace: **WQHF** target word: **DFBB**"; for the "Replace-Lowercase" task, it is like "For the following text, please modify all uppercase letters to lowercase". The counting range and words to replace are sampled with a uniform distribution. Table 9 shows the performance of our proposed FLM-101B against GPT-3 and GLM-130B on both counting and string replacement tasks.
Table 9: Performance of FLM-101B, GPT-3, and GLM-130B on rule understanding tasks.
Model | Average | Counting | Replace-Lowercase | Replace-Word
GPT-3 | 86.03 | 82.43 | 80.67 | 95.00
GLM-130B | 71.49 | 60.81 | 69.67 | 84.00
FLM-101B | 76.42 | 69.59 | 64.00 | 95.67
Results. On the counting task, FLM-101B achieves 69.59%, about 9 points better than GLM-130B. GPT-3 takes first place in Counting and Replace-Lowercase, and second place in Replace-Word. This is potentially because GPT-3 has the largest amount of English training data. This experiment shows that the advantages of each model are varied. Hence, in future work, rule understanding evaluation tasks should cover more scenarios. Finally, considering the cost of each model, the performance of FLM-101B is satisfactory. # 5.3 Pattern Mining Evaluation The pattern mining test is common in IQ tests. In detail, it is the induction and deduction of the patterns emerging in a new context. In general, it is difficult even for humans and is frequently used in intelligence tests. Again, we face the problem that the same test data might have appeared in large quantities, so we also use replacement methods similar to Section 5.1 to alleviate this problem. | 2309.03852#32 | 2309.03852#34 | 2309.03852 | [
"2306.15595"
] |
2309.03852#34 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Specifically, we build a benchmark with three tasks (i.e., Head & Tail, Full Repeating, and Head Slicing) for evaluation. Head & Tail is to add a head and a tail to the given input, which should be exactly the same as the ones in the given examples. Regarding Full Repeating, the input sequence should be fully repeated once. For the Head Slicing task, the model needs to return the first fixed number of characters of the input. The number can be inferred from the preceding examples. No instruction or clue is provided except the examples.
[Figure 4 shows three panels of few-shot examples, one per task (Head & Tail, Full Repeating, Head Slicing); e.g., for Full Repeating: Input: gEdcFa, Output: gEdcFagEdcFa.]
Figure 4: Examples of pattern mining evaluation.
We sample the input strings, heads, and tails from a uniform distribution. These tasks are actually the "alphabetical" versions of the list_functions sub-task of Big-Bench [53]. The original numerical version is so simple that most existing LLMs could achieve 90%+ accuracy. To improve the distinctiveness, we replace the numbers with characters. All these tasks require the model to discover the behavior patterns inside the given examples. Each task is 5-shot and contains 100 instances. Table 10 lists the experimental results of our proposed FLM-101B against GPT-3 and GLM-130B on pattern mining tasks. | 2309.03852#33 | 2309.03852#35 | 2309.03852 | [
"2306.15595"
] |
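Instances for the three pattern mining tasks can be generated in a few lines. This sketch is our own illustration of the construction just described (uniformly sampled alphabetic strings, 5-shot prompts with no instruction), not the released generator:

```python
import random
import string

def rand_str(n: int) -> str:
    return "".join(random.choices(string.ascii_letters, k=n))

def head_and_tail(inputs, head, tail):
    return [(s, head + s + tail) for s in inputs]

def full_repeating(inputs):
    return [(s, s + s) for s in inputs]

def head_slicing(inputs, k):
    return [(s, s[:k]) for s in inputs]

strings = [rand_str(random.randint(3, 6)) for _ in range(6)]
pairs = full_repeating(strings)                     # e.g. ('gEdcFa', 'gEdcFagEdcFa')
prompt = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in pairs[:5])  # 5 shots
prompt += f"\nInput: {pairs[5][0]}\nOutput:"        # the model must infer the pattern
print(prompt)
```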
2309.03852#35 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Table 10: Performance of FLM-101B, GPT-3, and GLM-130B on pattern mining tasks.
Model | Average | Head & Tail | Full Repeating | Head Slicing
GPT-3 | 70.00 | 61.00 | 92.00 | 57.00
GLM-130B | 53.00 | 38.00 | 70.00 | 51.00
FLM-101B | 64.67 | 52.00 | 79.00 | 63.00
Results. On all three tasks, FLM-101B outperforms GLM-130B by a large margin. For the Head & Tail and Full Repeating tasks, FLM-101B is a few points behind GPT-3, but outperforms the latter on the Head Slicing task. Considering the computational cost, FLM-101B exhibits noticeable abilities in this area. # 5.4 Anti-interference Evaluation Anti-interference capability is critical for finding and utilizing information that is truly related to a specific goal, in an unseen and noisy context (Figure 5). We believe that in addition to generalization, anti-interference is also one of the important principles of AGI. For example, many LLMs will babble when given noisy cues. Another famous hard problem, the cocktail party problem in speech recognition [38], also suggests the importance of the anti-interference ability of intelligent agents. To this end, we conduct this anti-interference evaluation. Figure 5 shows two typical examples of this test.
Anti-interference Evaluation
Multiple Key Retrieval
There is an important info hidden inside a lot of irrelevant text. Find it and memorize them. I will quiz you about the important information there. | 2309.03852#34 | 2309.03852#36 | 2309.03852 | [
"2306.15595"
] |
2309.03852#36 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Here we go. There and back again. Here we go. There and back again. Pass key 1 is 4°(_8bLIB6. Remember it. I4kh-DMSB8y is pass key 2. Here we go. There and back again. Here we go. There and back again. The pass key 1 I told you was
Supporting Facts
Examples:
Daniel went back to the office. Daniel travelled to the bathroom.
Q: Where is Daniel? A: bathroom
Sandra journeyed to the kitchen. Daniel journeyed to the bathroom.
Q: | 2309.03852#35 | 2309.03852#37 | 2309.03852 | [
"2306.15595"
] |
2309.03852#37 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Where is Sandra? A: kitchen
Daniel travelled to the hallway. John moved to the office. John went to the bathroom. John travelled to the office.
Q: Where is Daniel? A: hallway
Prompt:
Daniel went back to the hallway. Daniel travelled to the garden. Sandra went to the office. Sandra journeyed to the kitchen. Daniel went back to the hallway. Daniel travelled to the garden. Sandra went to the office. Sandra journeyed to the kitchen.
Q: Where is Daniel? A:
Figure 5: Examples of anti-interference evaluation.
Selected Tasks and Data Collection. We conduct anti-interference evaluation in three task types: multiple key retrieval, single supporting fact tracking, and two supporting facts tracking. Multiple key retrieval is a kind of puzzle that hides some important information (referred to as keys) inside a lot of irrelevant text. If the anti-interference ability of LLMs is not good enough, they will output wrong or even meaningless words. Even if LLMs pass the first challenge, they may still fail due to multiple relevant noises. | 2309.03852#36 | 2309.03852#38 | 2309.03852 | [
"2306.15595"
] |
2309.03852#38 | FLM-101B: An Open LLM and How to Train It with $100K Budget | We collect a multiple key retrieval dataset in a similar format to those in [7], with at most 3 keys in each instance, exemplified in Figure 5. The single supporting fact tracking and two supporting facts tracking tasks test whether a model can find the chain of supporting facts to answer a question correctly, which is hidden inside a set of irrelevant statements. There are two sub-tasks in the babi-20 [72] benchmark (qa1 and qa212) that are aligned with this setting. Thus, we | 2309.03852#37 | 2309.03852#39 | 2309.03852 | [
"2306.15595"
] |
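A multiple-key-retrieval instance in the spirit of Figure 5 can be generated as follows. This is our illustrative sketch (filler text, key format, and function names are assumptions), not the released data pipeline:

```python
import random
import string

FILLER = "Here we go. There and back again. "

def key_retrieval_instance(n_keys: int = 3, filler_blocks: int = 20):
    keys = ["".join(random.choices(string.ascii_letters + string.digits, k=10))
            for _ in range(n_keys)]
    parts = [FILLER] * filler_blocks
    for i, key in enumerate(keys, 1):  # scatter the keys among the noise
        parts.insert(random.randint(0, len(parts)),
                     f"Pass key {i} is {key}. Remember it. ")
    asked = random.randint(1, n_keys)
    prompt = ("There is an important info hidden inside a lot of irrelevant text. "
              "Find it and memorize it. I will quiz you about it. "
              + "".join(parts)
              + f"The pass key {asked} I told you was")
    return prompt, keys[asked - 1]     # (model input, expected answer)

prompt, answer = key_retrieval_instance()
```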
2309.03852#39 | FLM-101B: An Open LLM and How to Train It with $100K Budget | directly modify them in a generative format with 3 shots. We randomly sampled 300 questions for each of these three tasks. Table 11 shows the evaluation results on anti-interference.
12We drop qa3 due to its long context length and extraordinary difficulty for all the models.
Table 11: Performance of FLM-101B, GPT-3, and GLM-130B on anti-interference evaluation.
Model | Average | Multiple Key Retrieval | Single Supporting Fact | Two Supporting Facts
GPT-3 | 70.11 | 92.67 | 78.33 | 39.33
GLM-130B | 53.56 | 77.67 | 56.33 | 26.67
FLM-101B | 60.11 | 89.00 | 59.00 | 32.33
Results. Among all the baselines for this evaluation, FLM-101B achieves the second-best passing rates of 89.00%, 59.00%, and 32.33%, respectively, which is an advantage of about 11%, 3%, and 6% over GLM-130B. Considering the computational cost, FLM-101B delivers exciting performance. In conclusion, on our four additional evaluations inspired by IQ tests, FLM-101B outperforms GLM-130B and obtains competitive results compared to GPT-3 on some tasks at much lower cost. Beyond the impact of training data, this superiority may stem from the growth strategy itself: the smaller models in the early stages refine a more efficient search space, and this effect persists as the model grows larger and gains generalization ability. # 6 Related Work Scaling Up Language Models to 100B. The burgeoning advancements in hardware and computational techniques in recent years [47; 52] have laid a robust groundwork for the expansion of language models. The benefits of scaling up LLMs include discernible advantages in language perplexity supported by studies on scaling laws [23; 18; 19; 77], as well as the emergent cognitive competencies in models [69; 4]. In the realm of 100+ billion parameters, examples of closed-source pre-trained LLMs include GPT-3 [3], Gopher [42], and Palm [1]. | 2309.03852#38 | 2309.03852#40 | 2309.03852 | [
"2306.15595"
] |
2309.03852#40 | FLM-101B: An Open LLM and How to Train It with $100K Budget | 14 # Technical Report of FLM-101B REFERENCES important, the results can be substantially impacted by training data and domains. To measure other classes of abilities, existing research like Big-Bench [53] and babi-20 [72] include some sub-tasks relevant to IQ tests, while others still depend more on NLP and knowledge. In this work, we add additional ranges of evaluation in the IQ-test paradigms by re-organizing existing datasets as well as creating new ones where proper. Model Growth A line of existing work studies the progressive expansion of structures in training Transformer-like models [14; 51; 15; 6; 39; 62; 78]. To our knowledge, FLM-101B presents the first attempt to use a growth strategy to train LLMs in the 100B+ scale. For a more comprehensive summary, please refer to [78]. # 7 Conclusions and Future Work In this paper, we introduce FLM-101B, an open-source LLM that is successfully trained from scratch within a $100,000 budget. The key idea of reducing the training cost of FLM-101B is to utilize the growth strategy to break through the fixed number of model parameters. To fairly evaluate LLMs, we conduct a set of evaluations inspired by IQ tests. We believe that along this pathway, better IQ evaluation methods will continue to emerge in future studies. Experimental results show that FLM-101B outperforms strong baseline models under the same computational cost. The power of LLMs is very exciting. We believe that LLMs are one of the important possible technical paths to AGI. For the sustainable development of LLMs, we believe that it may be an effective path to construct a basic LLM with strong reasoning capabilities but not a large amount of knowledge (for cost saving), and then expand the knowledge of the LLM in different domains to better support applications. Besides, our exploration on the growth strategy as well as training stability would potentially be beneficial for future attempts of further scaling up LLMs, e.g., beyond 1T parameters. | 2309.03852#39 | 2309.03852#41 | 2309.03852 | [
"2306.15595"
] |
2309.03852#41 | FLM-101B: An Open LLM and How to Train It with $100K Budget | important, the results can be substantially impacted by training data and domains. To measure other classes of abilities, existing research like Big-Bench [53] and babi-20 [72] includes some sub-tasks relevant to IQ tests, while others still depend more on NLP and knowledge. In this work, we add additional ranges of evaluation in the IQ-test paradigms by re-organizing existing datasets as well as creating new ones where appropriate. Model Growth. A line of existing work studies the progressive expansion of structures in training Transformer-like models [14; 51; 15; 6; 39; 62; 78]. To our knowledge, FLM-101B presents the first attempt to use a growth strategy to train LLMs at the 100B+ scale. For a more comprehensive summary, please refer to [78]. # 7 Conclusions and Future Work In this paper, we introduce FLM-101B, an open-source LLM that is successfully trained from scratch within a $100,000 budget. The key idea for reducing the training cost of FLM-101B is to utilize the growth strategy to break through the fixed number of model parameters. To fairly evaluate LLMs, we conduct a set of evaluations inspired by IQ tests. We believe that along this pathway, better IQ evaluation methods will continue to emerge in future studies. Experimental results show that FLM-101B outperforms strong baseline models under the same computational cost. The power of LLMs is very exciting. We believe that LLMs are one of the important possible technical paths to AGI. For the sustainable development of LLMs, we believe that it may be an effective path to construct a basic LLM with strong reasoning capabilities but not a large amount of knowledge (for cost saving), and then expand the knowledge of the LLM in different domains to better support applications. Besides, our exploration of the growth strategy as well as training stability would potentially be beneficial for future attempts at further scaling up LLMs, e.g., beyond 1T parameters. # Acknowledgments This work is supported by the National Key R&D Program of China (2022ZD0116300) and the National Science Foundation of China (NSFC No. 62106249). We would like to thank Hanxiao Qu, Yan Tian, Xigang Cao, Xiaolong Zhang, Kailong Xie and Conghui Guo for their help on computational resources, Quanyue Ma, Hanyu Zhao, Yihui Guo and Jiahong Leng for their help on data, and all other colleagues' strong support for this project. # References [1] Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernández Ábrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A. Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, et al. | 2309.03852#40 | 2309.03852#42 | 2309.03852 | [
"2306.15595"
] |
2309.03852#42 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Palm 2 technical report. CoRR, abs/2305.10403, 2023. [2] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin Mann, and Jared Kaplan. | 2309.03852#41 | 2309.03852#43 | 2309.03852 | [
"2306.15595"
] |
2309.03852#43 | FLM-101B: An Open LLM and How to Train It with $100K Budget | [3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877– | 2309.03852#42 | 2309.03852#44 | 2309.03852 | [
"2306.15595"
] |
2309.03852#44 | FLM-101B: An Open LLM and How to Train It with $100K Budget | 1901, 2020. [4] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR, abs/2303.12712, 2023. [5] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie. A survey on evaluation of large language models. CoRR, abs/2307.03109, 2023. [6] Cheng Chen, Yichun Yin, Lifeng Shang, Xin Jiang, Yujia Qin, Fengyu Wang, Zhi Wang, Xiao Chen, Zhiyuan Liu, and Qun Liu. bert2bert: Towards reusable pretrained language models. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2134–2148. Association for Computational Linguistics, 2022. [7] Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595, 2023. [8] Tianqi Chen, Ian J. Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. In Yoshua Bengio and Yann LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. | 2309.03852#43 | 2309.03852#45 | 2309.03852 | [
"2306.15595"
] |
2309.03852#45 | FLM-101B: An Open LLM and How to Train It with $100K Budget | [9] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the AI2 reasoning challenge. CoRR, abs/1803.05457, 2018. [10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics, 2019. [11] Interactive information extraction by semantic information graph. In Luc De Raedt, editor, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4100–4106. ijcai.org, 2022. [12] Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, Andy Jones, Sam Bowman, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Danny Hernandez, Tristan Hume, Josh Jacobson, Scott Johnston, Shauna Kravec, Catherine Olsson, Sam Ringer, Eli Tran-Johnson, Dario Amodei, Tom Brown, Nicholas Joseph, Sam McCandlish, Chris Olah, Jared Kaplan, and Jack Clark. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. CoRR, abs/2209.07858, 2022. | 2309.03852#44 | 2309.03852#46 | 2309.03852 | [
"2306.15595"
] |
2309.03852#46 | FLM-101B: An Open LLM and How to Train It with $100K Budget | [13] Amelia Glaese, Nat McAleese, Maja Trebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin J. Chadwick, Phoebe Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, Po-Sen Huang, Ramona Comanescu, Fan Yang, Abigail See, Sumanth Dathathri, Rory Greig, Charlie Chen, Doug Fritz, Jaume Sanchez Elias, Richard Green, Sona Mokrá, Nicholas Fernando, Boxi Wu, Rachel Foley, Susannah Young, Iason Gabriel, William Isaac, John Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, and Geoffrey Irving. Improving alignment of dialogue agents via targeted human judgements. CoRR, abs/2209.14375, 2022. | 2309.03852#45 | 2309.03852#47 | 2309.03852 | [
"2306.15595"
] |
2309.03852#47 | FLM-101B: An Open LLM and How to Train It with $100K Budget | [14] Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, and Tieyan Liu. Efficient training of bert by progressively stacking. In International conference on machine learning, pages 2337–2346. PMLR, 2019. [15] Xiaotao Gu, Liyuan Liu, Hongkun Yu, Jing Li, Chen Chen, and Jiawei Han. On the transformer growth for progressive bert training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5174–5180, 2021. [16] Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. The false promise of imitating proprietary llms. CoRR, abs/2305.15717, 2023. | 2309.03852#46 | 2309.03852#48 | 2309.03852 | [
"2306.15595"
] |
2309.03852#48 | FLM-101B: An Open LLM and How to Train It with $100K Budget | [17] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. [18] Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, and Sam McCandlish. | 2309.03852#47 | 2309.03852#49 | 2309.03852 | [
"2306.15595"
] |
2309.03852#49 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Scaling laws for autoregressive generative modeling. CoRR, abs/2010.14701, 2020. [19] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Oriol Vinyals, Jack W. Rae, and Laurent Sifre. | 2309.03852#48 | 2309.03852#50 | 2309.03852 | [
"2306.15595"
] |
2309.03852#50 | FLM-101B: An Open LLM and How to Train It with $100K Budget | An empirical analysis of compute-optimal large language model training. In NeurIPS, 2022. [20] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. CoRR, abs/2305.08322, 2023. [21] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. | 2309.03852#49 | 2309.03852#51 | 2309.03852 | [
"2306.15595"
] |
2309.03852#51 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Spanbert: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguistics, 8:64–77, 2020. [22] Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. | 2309.03852#50 | 2309.03852#52 | 2309.03852 | [
"2306.15595"
] |
2309.03852#52 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Language models (mostly) know what they know. CoRR, abs/2207.05221, 2022. [23] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. CoRR, abs/2001.08361, 2020. [24] Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. Reducing activation recomputation in large transformer models, 2022. [25] Xiang Li, Xin Jiang, Xuying Meng, Aixin Sun, and Yequan Wang. | 2309.03852#51 | 2309.03852#53 | 2309.03852 | [
"2306.15595"
] |
2309.03852#53 | FLM-101B: An Open LLM and How to Train It with $100K Budget | Freelm: Fine-tuning-free language model. CoRR, abs/2305.01616, 2023. [26] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. CoRR, abs/2305.20050, 2023. | 2309.03852#52 | 2309.03852#54 | 2309.03852 | [
"2306.15595"
] |
2309.03852#54 | FLM-101B: An Open LLM and How to Train It with $100K Budget | [27] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 3214–3252. Association for Computational Linguistics, 2022. [28] Etai Littwin and Greg Yang. Adaptive optimization in the ∞-width limit. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [29] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: | 2309.03852#53 | 2309.03852#55 | 2309.03852 | [
"2306.15595"
] |