# CGMI: Configurable General Multi-Agent Interaction Framework
...and reiterated the topic in her subsequent planning. Following the actions of encouragement, Mrs. Smith's reflective records recognized her efforts in affirming and uplifting students. Through reflection on Course-ONE, Mrs. Smith found that Emily exhibited anxiety when faced with mathematical challenges. This insight directly influenced Mrs. Smith's reassuring statement to Emily in Course-TWO: "I'm pleased to see you've overcome your apprehension towards mathematics."

The effect of the tree-structured persona model. To discern whether agents with varied personality traits exhibit distinguishable behaviors during interactions, we executed a comparative study, depicted in Figure 5. One lesson involved personality allocation, detection, and maintenance, whereas the other lacked any defined agent personalities. In the absence of assigned traits, there was notable uniformity in the expressions of the five students, who often resorted to statements like "I'm excited...".
In contrast, once unique personality traits were allocated, their expressions became more nuanced and aligned with their respective personas. For instance, the outgoing Ryan would suggest a "discussion with classmates", while the industrious Ying Zheng would exude a "passion for learning".

Furthermore, the right side of Figure 5 displays the statements made by the student Emily throughout the class. Judging from the records of her remarks, the Emily agent demonstrated a consistent persona, interacting with teachers and classmates based on the previously established persona. At the start of the class, she remarked, "I'm considerably anxious about this quadratic equations segment."
In the middle part of the course, she still showed her unfamiliarity with, and lack of confidence in, the current knowledge, expressing, for example, "I'm not well-versed with quadratic equations, yet I'm keen on learning and exploring various aspects..." and "Being an average student, I might require a while to fully comprehend quadratic equations".

(2) Between lessons: Across different courses, the proposed cognitive structure remains valid. It plays a crucial role in refining Mrs. Smith's teaching focus, deepening understanding, and adapting teaching methods.
For example, by imbuing agents with human-like qualities, they can adeptly distill insights from evolving scenarios and exhibit individualized responses. In addition, it allows agents to recalibrate their actions based on accumulated knowledge and abilities. This significantly augments agents' adaptive capabilities in multifaceted environments.

Figure 6: The influence of personal traits on agent expression. [Bar charts comparing, for the teacher's question "Can anyone tell me the general form of a quadratic function?", each student's willingness-to-answer intensity and the number of times each student is selected under willingness-based versus random selection, together with the role set: John (Athletic Star): Extroverted, Sociable, Poor concentration; Emily (Art Prodigy): Artistic, Expressive, Occasionally motivated; Ryan (Social Butterfly): Outgoing, Charismatic, Occasionally motivated; Samantha (Contemplator): Introverted, Independent, Quick learner; Ying Zheng (Academic Enthusiast): Diligent, Focused, Quick learner.]
Concurrently, the tree-structured character model introduced in this study effectively and efficiently captures and retains the personalized data of agents.

Quantitative Analysis of Interaction Logic
Based on the "classroom teaching" scenario restored by CGMI, this paper compares the rationality of different interaction logics under the same question.
Analysis of willingness to speak. As shown in Figure 6, when the teacher posed the question to all students, "Can anyone tell me the general form of a quadratic function?", the outcomes differed between the answer-willingness judgment agent and the random selection method. The former reported the students' willingness-to-answer intensities as John: 3, Emily: 5, Ryan: 4, Samantha: 2, Ying Zheng: 4. Notably, the students' willingness strengths are highly consistent with their character traits; for instance, the expressive Emily exhibited a high willingness to answer, while the introverted Samantha showed less. The random selection method, however, produced different results.

The discrepancy between the two methods is not coincidental. We recorded how many times each method recommended a student to answer when the teacher posed questions to the entire class during a complete lesson. As Figure 6 shows, the answer-willingness judgment agent, which considers factors such as the students' personalities, classroom dynamics, and their grasp of the subject, recommended John 4 times, Emily 9 times, Ryan 6 times, Samantha 1 time, and Ying Zheng 8 times. With random selection, the results were John 7 times, Emily 3 times, Ryan 4 times, Samantha 6 times, and Ying Zheng 8 times: the expressive Emily was called on only 3 times, significantly undermining the rationality of the interaction between the teacher and students in the virtual scenario.
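How the two selection schemes compared above differ can be sketched in a few lines. This is a minimal illustration assuming fixed willingness scores; the score values and the proportional-sampling rule are stand-ins for the LLM-based willingness judgment agent, not the paper's implementation.

```python
import random

# Hypothetical willingness-to-answer scores for one question, mirroring the
# intensities reported above (the real scores come from the judgment agent).
willingness = {"John": 3, "Emily": 5, "Ryan": 4, "Samantha": 2, "Ying Zheng": 4}

def pick_random(students):
    """Baseline: ignore personas and classroom state, pick uniformly at random."""
    return random.choice(list(students))

def pick_by_willingness(scores):
    """Willingness-based logic: sample a student in proportion to their
    willingness intensity, so expressive students answer more often."""
    names, weights = zip(*scores.items())
    return random.choices(names, weights=weights, k=1)[0]

print(pick_random(willingness))          # any student with equal probability
print(pick_by_willingness(willingness))  # "Emily" is the most likely outcome
```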
The effectiveness of questioning. In addition to posing questions to all students, teachers also selectively direct questions to specific students. This selection is influenced by two aspects: (1) teaching plans that target particular students, and (2) the teacher's analysis of the student's status and of classroom dynamics during the teaching process.

Figure 7: The influence of personal traits on agent expression. [Excerpt of the teaching plan ("Based on the students' personalities: Ying Zheng (Academic Enthusiast): Challenge Ying Zheng with advanced problem-solving tasks and encourage him to explore additional methods for solving") and of the class process (Mrs. Smith: "Next, we will learn about the different methods of solving quadratic equations... Ying Zheng! Exploring different methods of solving..." Ying Zheng: "By trying out various approaches, we can...").]

As shown in Figure 7, the teaching plan specifies that the teacher can encourage Ying Zheng to explore different solutions. In the subsequent teaching process, the teacher aptly integrated this instructional arrangement into the lecture and specifically asked Ying Zheng to explore, leading into the next phase of instruction.

In summary, the flexible interaction logic setting ensures that the interaction process among multiple agents is no longer a random choice made without considering the actual situation and role settings, nor a process in which every role needs to be expressed. This introduces more possibilities for virtual scenarios.
# Conclusion
This paper introduces a multi-agent interaction framework (CGMI) that supports personalized configurations, enabling multiple agents to engage in anthropomorphic interactions and collaborations; it can also simulate domain-specific social phenomena. We designed a cognitive architecture equipped with a domain skill library, which allows agents to combine domain knowledge for reflection and planning and to condense working memory into declarative and procedural memories. With the assistance of general agents, the authenticity of scenarios can be further enhanced. Moreover, we employed a virtual "classroom teaching" scenario to simulate the teaching process between teachers and students, and conducted a comparative analysis of their interaction content and logic, verifying the effectiveness of CGMI. In the future, we hope that the social scenarios simulated by multiple agents will not only provide users with valuable social experimental data, aiding the development of large models, but also support industrial applications, such as assisting teaching and gamified teaching.
# References
Aher, G. V.; Arriaga, R. I.; and Kalai, A. T. 2023. Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies. In Krause, A.; Brunskill, E.; Cho, K.; Engelhardt, B.; Sabato, S.; and Scarlett, J., eds., Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, 337-371. PMLR.
Alexandru, A.; Tirziu, E.; Tudora, E.; and Bica, O. 2015. Enhanced education by using intelligent agents in multi-agent adaptive e-learning systems. Studies in Informatics and Control, 24(1): 13-22.
Anderson, J. R. 1983. A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22(3): 261-295.
Argyle, L. P.; Busby, E. C.; Fulda, N.; Gubler, J. R.; Rytting, C.; and Wingate, D. 2023. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3): 337-351.
Bran, A. M.; Cox, S.; White, A. D.; and Schwaller, P. 2023. ChemCrow: Augmenting large-language models with chemistry tools. arXiv:2304.05376.
Davidsson, P. 2002. Agent based social simulation: A computer science view. Journal of Artificial Societies and Social Simulation, 5(1).
Grigorenko, E.; and Sternberg, R. 1993. Thinking Styles in Teaching Inventory. Unpublished test, Yale University.
Jiang, H.; Zhang, X.; Cao, X.; and Kabbara, J. 2023. PersonaLLM: Investigating the Ability of GPT-3.5 to Express Personality Traits and Gender Differences. arXiv:2305.02547.
John, O. P.; Srivastava, S.; et al. 1999. The Big-Five trait taxonomy: History, measurement, and theoretical perspectives.
Krishna, R.; Lee, D.; Fei-Fei, L.; and Bernstein, M. S. 2022. Socially situated artificial intelligence enables learning from human interaction. Proceedings of the National Academy of Sciences, 119(39): e2115730119.
Li, G.; Hammoud, H. A. A. K.; Itani, H.; Khizbullin, D.; and Ghanem, B. 2023. CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society. arXiv:2303.17760.
Markel, J. M.; Opferman, S. G.; Landay, J. A.; and Piech, C. 2023. GPTeach: Interactive TA Training with GPT Based Students.
Nair, V.; Schumacher, E.; Tso, G.; and Kannan, A. 2023. DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents. arXiv:2303.17071.
OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt. Accessed: 2023-03-01.
OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774.
Park, J. S.; O'Brien, J. C.; Cai, C. J.; Morris, M. R.; Liang, P.; and Bernstein, M. S. 2023. Generative Agents: Interactive Simulacra of Human Behavior. arXiv:2304.03442.
Park, J. S.; Popowski, L.; Cai, C.; Morris, M. R.; Liang, P.; and Bernstein, M. S. 2022. Social Simulacra: Creating Populated Prototypes for Social Computing Systems. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, UIST '22. New York, NY, USA: Association for Computing Machinery. ISBN 9781450393201.
Press, O.; Zhang, M.; Min, S.; Schmidt, L.; Smith, N. A.; and Lewis, M. 2023. Measuring and Narrowing the Compositionality Gap in Language Models. arXiv:2210.03350.
Pudane, M.; Lavendelis, E.; and Radin, M. A. 2017. Human Emotional Behavior Simulation in Intelligent Agents: Processes and Architecture. Procedia Computer Science, 104: 517-524. ICTE 2016, Riga Technical University, Latvia.
Qian, C.; Cong, X.; Yang, C.; Chen, W.; Su, Y.; Xu, J.; Liu, Z.; and Sun, M. 2023. Communicative Agents for Software Development. arXiv:2307.07924.
Qian, Q.; Huang, M.; Zhao, H.; Xu, J.; and Zhu, X. 2018. Assigning Personality/Profile to a Chatting Machine for Coherent Conversation Generation. In IJCAI, 4279-4285.
Soloman, B. A.; and Felder, R. M. 2005. Index of Learning Styles Questionnaire. NC State University. Available online at: http://www.engr.ncsu.edu/learningstyles/ilsweb.html (last visited on 14.05.2010), 70.
Wang, L.; Xu, W.; Lan, Y.; Hu, Z.; Lan, Y.; Lee, R. K.-W.; and Lim, E.-P. 2023a. Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models. arXiv:2305.04091.
Wang, Z.; Cai, S.; Liu, A.; Ma, X.; and Liang, Y. 2023b. Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents. arXiv:2302.01560.
Wang, Z.; Mao, S.; Wu, W.; Ge, T.; Wei, F.; and Ji, H. 2023c. Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration. arXiv:2307.05300.
Weng, L. 2023. LLM-powered Autonomous Agents. https://lilianweng.github.io/posts/2023-06-23-agent/. Accessed: 2023-06-23.
Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2023. ReAct: Synergizing Reasoning and Acting in Language Models. arXiv:2210.03629.
# Appendix
The appendix presents the character settings for each character, a tree-structured learning style scale, and a teaching style scale.

Role Set
In this work, the initialization of role agents is mainly carried out from the perspectives of career, name, basic information, personality, and teaching or learning style. Figure 8 shows the character setting of the teacher, Mrs. Smith. Figures 9, 10, 11, 12, and 13 show the character settings of the students Ryan, John, Emily, Samantha, and Ying Zheng, respectively.

Figure 10 (excerpt): Character setting for John. Career: Student. Name: John. Description: John is an Athletic Star student whose physical fitness and team spirit allow him to excel in various sports activities; the resilience and determination he demonstrates when faced with challenges are also significant strengths. He might neglect academic learning and artistic development because he devotes most of his time and energy to sports activities, and he might rely too heavily on sports, overlooking the need for balanced physical and mental well-being. Personality (Big Five Personality): [[21, 5, 5, 3, 5, 3], [18, 4, 3, 3, 4, 4], [18, 4, 4, 3, 4, 3], [16, 4, 3, 3, 3, 3], [16, 3, 3, 3, 4, 3]]. Learning Style (Solomon's Learning Styles): [[Active-3, a, a, a, a, a, b, b, a, a, b, b], [Sensory-10, a, a, a, a, b, a, a, a, a, a, a], [Visual-11, a, a, a, a, a, a, a, a, a, a, a], [Sequential-10, a, a, a, a, a, a, b, a, a, a, a]].
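A minimal sketch of how such a role setting could be represented programmatically, using the values from John's card above; the field names and the interpretation of the score vectors are illustrative assumptions, not CGMI's actual schema.

```python
# Illustrative role-agent configuration mirroring the character card above.
# In the Big Five vectors, the first entry appears to be the Level-1 total and
# the remaining entries the Level-2 item scores (e.g., 21 = 5 + 5 + 3 + 5 + 3).
john_role = {
    "career": "Student",
    "name": "John",
    "description": "Athletic Star: physical fitness and team spirit let him excel "
                   "in sports, but he may neglect academic and artistic development.",
    "personality_big_five": [
        [21, 5, 5, 3, 5, 3],
        [18, 4, 3, 3, 4, 4],
        [18, 4, 4, 3, 4, 3],
        [16, 4, 3, 3, 3, 3],
        [16, 3, 3, 3, 4, 3],
    ],
    # Solomon's learning styles: dimension label plus the 11 (a)/(b) item choices.
    "learning_style": {
        "Processing": ["Active-3"] + list("aaaaabbaabb"),
        "Perception": ["Sensory-10"] + list("aaaabaaaaaa"),
        "Input": ["Visual-11"] + list("aaaaaaaaaaa"),
        "Understanding": ["Sequential-10"] + list("aaaaaabaaaa"),
    },
}
```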
Sternberg Thinking Styles in Teaching
Mrs. Smith's teaching style can be described by the Sternberg Thinking Styles in Teaching Inventory in a tree-structured format (Figure 14). Each Level-2 node has a score representing the degree of match between the description provided and the actual teaching style, with a maximum of 7 and a minimum of 1. Each Level-1 node also has a corresponding score, which is the sum of the scores of all its child nodes; the higher the value, the higher the degree of matching.
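A minimal sketch of the scoring rule just described; the node classes and the example scores are illustrative, not CGMI's data structures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:               # Level-2 node: one inventory statement
    description: str
    score: int            # degree of match, from 1 (min) to 7 (max)

@dataclass
class Style:              # Level-1 node: one thinking style, e.g. "Legislative"
    name: str
    items: List[Item] = field(default_factory=list)

    @property
    def score(self) -> int:
        # The Level-1 score is the sum of all child (Level-2) scores;
        # the higher the value, the better the style matches the teacher.
        return sum(item.score for item in self.items)

legislative = Style("Legislative", [
    Item("I like to have students design discussion projects they are interested in.", 6),
    Item("I want students to learn how to solve problems on their own.", 7),
])
print(legislative.score)  # 13
```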
Solomon's Learning Styles
Students' learning styles can be described by Solomon's Learning Styles Inventory in a tree-structured format (Figure 15). Each Level-1 node has a type representing the student's type in one of four dimensions: if, among its 11 sub-nodes, option (a) is selected more often than option (b), the dimension takes the first category in the description; otherwise it takes the second. Each Level-2 node has a description and a choice indicating the selection for the corresponding evaluation question.
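The majority rule for one dimension can be sketched as follows; the helper and the example answers (John's Active/Reflective choices from his card above) are for illustration only.

```python
def dimension_type(choices, poles=("Active", "Reflective")):
    """Each dimension has 11 items answered "a" or "b". If "a" is chosen more
    often, the student's type is the first pole (e.g. Active); otherwise it is
    the second pole (e.g. Reflective)."""
    assert len(choices) == 11
    a_count = choices.count("a")
    return poles[0] if a_count > 11 - a_count else poles[1]

john_processing = ["a", "a", "a", "a", "a", "b", "b", "a", "a", "b", "b"]
print(dimension_type(john_processing))  # Active
```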
Figures 8-13: Character settings for Mrs. Smith, Ryan, John, Emily, Samantha, and Ying Zheng, respectively (see the Role Set description above).

Figure 14: The Sternberg Thinking Styles in Teaching Inventory. [The original figure is a tree-structured questionnaire: each Level-1 node is a thinking style (e.g., Legislative, Judicial, Global) with a score field, and each Level-2 node is an inventory item with a description (e.g., "I like to have students design some discussion projects that they are interested in.") and a score field.]
Figure 15: Solomon's Learning Styles Inventory. [The original figure is a tree-structured questionnaire: each Level-1 node is one of the four dimensions (Processing: Active vs. Reflective; Perception: Sensory vs. Intuitive; Input: Visual vs. Verbal; Understanding: Sequential vs. Global), and each Level-2 node is an item with a description offering options (a) and (b) and a choice field (e.g., "To better understand something, I first (a) Try it out. (b) Contemplate it deeply.").]
# D4: Improving LLM Pretraining via Document De-Duplication and Diversification

Kushal Tirumala* (Meta AI Research), Daniel Simig* (Meta AI Research), Armen Aghajanyan (Meta AI Research), Ari S. Morcos (Meta AI Research)

*Equal contribution. Correspondence emails: [email protected], [email protected]

# Abstract
Over recent years, an increasing amount of compute and data has been poured into training large language models (LLMs), usually by doing one-pass learning on as many tokens as possible, randomly selected from large-scale web corpora. While training on ever-larger portions of the internet leads to consistent performance improvements, the size of these improvements diminishes with scale, and there has been little work exploring the effect of data selection on pre-training and downstream performance beyond simple de-duplication methods such as MinHash. Here, we show that careful data selection (on top of de-duplicated data) via pre-trained model embeddings can speed up training (20% efficiency gains) and improve average downstream accuracy on 16 NLP tasks (by up to 2%) at the 6.7B model scale. Furthermore, we show that repeating data intelligently consistently outperforms baseline training (while repeating random data performs worse than baseline training). Our results indicate that clever data selection can significantly improve LLM pre-training, call into question the common practice of training for a single epoch on as much data as possible, and demonstrate a path to keep improving our models past the limits of randomly sampling web data.
# Introduction
Due to computational limits, initial work on language model pre-training focused on training models on small, high-quality text datasets such as BookCorpus [61] and Wikipedia [32]. More recently, however, catalyzed by works like [40], advancements in large language models (LLMs) have been driven by leveraging large collections of unlabeled, uncurated data derived from snapshots of the internet (CommonCrawl [16, 39, 41]), trading off small quantities of heavily-curated data for huge quantities of less-curated data. Because of the dramatic increase in data quantity, these strategies have resulted in higher-performing models and have sparked a new paradigm wherein massive, largely unfiltered datasets are utilized for training [11, 46, 50].

Despite the essential role that large-scale web data now plays in LM pre-training, data curation and selection for large-scale web data have not been thoroughly explored. This is primarily due to the universality of compute and data scaling laws [20, 25], which give practitioners a low-risk way to reliably improve LM performance by merely adding "more" data, not necessarily the "right" data.
Indeed, the data selection method used to model scaling laws (along with the data selection methods used in most LLM pre-training pipelines) involves simply randomly sampling tokens from web data dumps that have been put through a combination of simple heuristic filtering (e.g., to eliminate very short strings) and very-near-match de-duplication [27]. If we continue relying on scaling laws to improve LLMs, we will quickly hit diminishing returns due to the power-law nature of scaling laws. We will therefore need exponentially more data to maintain a consistent marginal improvement, which may prove especially challenging as we are fast approaching the limits of available human-generated text data [51].

Encouragingly, in the context of vision, Sorscher et al. [47] demonstrated that we could leverage simple data selection strategies to overcome costly power-law scaling. They compare numerous data selection methods and find that clustering data points in a pre-trained embedding space and ranking them according to the distance to the cluster centroid ("SSL Prototypes") significantly improves the data efficiency of vision models. Recently, Abbas et al. [1] demonstrated that using a pre-trained embedding space to de-duplicate data ("SemDeDup") improves both the efficiency and performance of vision-language models such as CLIP. However, there has been little exploration of these or related approaches in training LLMs at scale. Motivated by this, we argue that by combining these approaches and applying them to LLMs, relatively simple data selection strategies leveraging pre-trained embeddings can significantly improve LLM training.
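A minimal sketch of the prototype-style ranking described above, assuming document embeddings from a pre-trained model are already available; the cluster count, keep fraction, and pruning direction are illustrative choices rather than the authors' exact recipe.

```python
import numpy as np
from sklearn.cluster import KMeans

def prototype_scores(embeddings: np.ndarray, n_clusters: int = 100) -> np.ndarray:
    """Score each document by its distance to its cluster centroid in the
    embedding space (the core idea behind the SSL Prototypes metric)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
    centroids = km.cluster_centers_[km.labels_]   # centroid assigned to each point
    return np.linalg.norm(embeddings - centroids, axis=1)

def select_keep(embeddings: np.ndarray, keep_fraction: float = 0.8) -> np.ndarray:
    """Keep a fixed budget of the least prototypical (most distant) documents,
    pruning the densest, most redundant regions of the embedding space first."""
    scores = prototype_scores(embeddings)
    k = int(len(scores) * keep_fraction)
    return np.argsort(scores)[-k:]                # indices of kept documents
```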
Specifically, our contributions are as follows:

- We investigate different data selection strategies for standard LLM pre-training setups where data has already been manually filtered / de-duplicated (e.g., MinHash), and where we do not know the target distribution for which we optimize performance. We argue that the performance of SSL Prototypes is affected by duplicate-driven clusters in the embedding space. In Section 3.4 we propose a new data selection strategy D4 that utilizes SemDeDup to avoid getting impacted by such clusters.
- In Section 4.1, we show that in the compute-limited regime where we have "infinite" source data and train models with fixed token budgets, we can achieve better pre-training perplexity and downstream accuracy than random iid data selection and previously established methods. Furthermore, we show that our method D4 can achieve around 20% efficiency gains at the 6.7B model scale, and that the magnitude of efficiency gains increases with model scale.
- In the data-limited regime, where we run out of data and must epoch over data, cleverly choosing what data to repeat can beat training on randomly selected new data, whereas randomly choosing data to repeat underperforms adding new data (Section 4.2). This calls into question the standard practice of single-epoch LLM training, and suggests that epoching over intelligently subselected data might be a better approach.
Figure 1: Learning curves for 6.7B OPT model pretraining on 100B tokens, with data selected with D4 (pink line) and randomly (gray line). D4 significantly outperforms baseline training, getting between 18-20% efficiency gains on validation perplexity and 2% increase in average 0-shot downstream accuracy across 16 NLP tasks. See Section A.2 for full learning curves. [Panels: Non Web Snapshots perplexity, Instructions + Answers perplexity, and 0-shot downstream accuracy, each plotted against number of tokens seen.]

# 2 Related Work

Data selection in non-text domains: Numerous works have successfully used data selection techniques in vision models [6, 10, 23, 31, 34, 38, 49], though these have largely been at sub-ImageNet scale. Some of these works develop pruning metrics that score individual data points (for example, EL2N from Paul et al. [38]), while some focus on data-efficiency and attempt to find groups of points that allow models to reach baseline performance with less data points, e.g., coresets [9, 35, 44, 60]. Sorscher et al. [47] compares many of the existing individual-score methods at ImageNet scale, finding that their SSL prototypes metric and the (prohibitively expensive)
memorization metric from Feldman and Zhang [15] generally outperform other methods. In the audio domain, Dong et al. [14] computes importance embeddings to find important training samples for audio scene classification. More recently, Abbas et al. [1] demonstrated very encouraging results on vision-language models (CLIP models) using SemDeDup, a similar method to SSL prototypes but focused on semantic deduplication. Our work combines these approaches and applies them to large-scale LLMs.

Effect of pre-training data on LM performance: Gao et al. [16] trains variants of GPT-2 [40] models from scratch to compare the "Pile" dataset to CommonCrawl-derived corpora. Radford et al. [40] demonstrates the positive impact of the quality filters and data de-duplication methods used to curate MassiveWeb by training 1.4B parameter models from scratch. Hernandez et al. [19] quantifies the effect of various amounts of artificially created data duplication and provides analysis on interpreting the changes in the behaviour of the models trained on duplicated data. Concurrently to our work, Xie et al. [56] propose using importance resampling to align the distribution of web data to high-quality reference corpora such as Wikipedia. Similarly, Gururangan et al. [17] explores data selection strategies for adapting LMs to a task-specific corpus. Another line of recent work explores how data mixture affects pre-training, with Xie et al. [55] demonstrating impressive improvements in downstream accuracy and perplexity across all datasets for 8B parameter models trained on the Pile. Similarly, Longpre et al. [30] explores the role of text quality, toxicity, age, and domain distribution of training data on LLM performance. Outside of data curation, there has been a recent surge of work exploring the impact of repeating data [5, 37, 57], generally concluding that repeating tokens is worse than training on new tokens (which we question in Section 4.2).
# 3 Experimental Setup

Notation: Given a source dataset, Dsource, of documents (crawled web pages) and model architecture, M, we aim to find a strategy S for selecting a subset of these documents that maximizes some evaluation metric E(M(D_{S,R})). R indicates the proportion of remaining documents from the source dataset Dsource after selecting data with strategy S. For this reason, we refer to R throughout this work as the selection ratio: for example, if R = 0.25 and |Dsource| = 100 million, then we select 25% of documents from a source dataset of size 100M documents to arrive at a training dataset with 25M documents. We operate at the granularity of a single document, independently of how the model trainer would pack these documents into batches later. Throughout the paper, we use random selection as the baseline for S, as it is the most common method for selecting data for language model pre-training. In the rest of this section, we describe our choices of source dataset (Dsource), model (M), evaluation metric (E), and, most importantly, our suggestions for the selection strategy (S).

# 3.1 Training Dataset (choice for Dsource)

We perform all of our training runs on a version of CommonCrawl pre-processed with a CCNet [54] pipeline identical to the one used by Touvron et al. [50]. We add an additional step of MinHash-based de-duplication (see more details in Section A.1). Applying this common step before our experiments guarantees that any effects observed in our experiments complement the currently prevalent approach of MinHash-based data de-duplication strategies. Throughout the rest of this work, we refer to this dataset as CC-dedup.
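The paper does not spell out the exact MinHash configuration beyond citing CCNet and Section A.1, so the following is only a minimal sketch of what a MinHash-based near-duplicate filter looks like, using the `datasketch` library; the shingle size and Jaccard threshold are illustrative assumptions rather than the settings used for CC-dedup.

```python
# Illustrative sketch of MinHash-based near-duplicate removal; not the exact
# CCNet/CC-dedup pipeline. Shingle size and threshold are assumptions.
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    # character 5-grams as shingles (an illustrative choice)
    for shingle in {text[i:i + 5] for i in range(max(1, len(text) - 4))}:
        m.update(shingle.encode("utf-8"))
    return m

def minhash_dedup(docs: dict, threshold: float = 0.8) -> list:
    """Return document ids to keep, dropping near-duplicates above `threshold`."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for doc_id, text in docs.items():
        m = minhash_of(text)
        if lsh.query(m):          # a near-duplicate has already been kept
            continue
        lsh.insert(doc_id, m)
        kept.append(doc_id)
    return kept
```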
# 3.2 Model Training (choices for M and Ttarget)

To evaluate different configurations of data selection strategies, we train OPT [59] models from scratch on the pruned versions of datasets. We use the standard model architectures and settings of Zhang et al. [59] and use MetaSeq [59] to train all our models. For 125M models, we train to Ttarget = 3B tokens. For 1.3B parameter models, we train to a target token count of Ttarget = 40B. For 6.7B parameter models, we train to Ttarget = 100B tokens. We choose these by trimming down the token budgets suggested by Hoffmann et al. [20] to meet our compute limitations. We provide full details of our training setup in Section A.1.

# 3.3 Evaluation Metrics (choices for E)

We keep most of our evaluation consistent with the setup from Zhang et al. [59].
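For reference, the validation perplexities reported below are the exponentiated average token-level negative log-likelihood. A minimal sketch follows, assuming a HuggingFace-style OPT checkpoint purely for illustration; the paper's own setup follows Zhang et al. [59].

```python
# Hedged sketch: perplexity of a causal LM on a list of validation texts.
# The checkpoint name and tokenization details are illustrative assumptions.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def perplexity(texts, model_name="facebook/opt-125m", max_len=2048):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        enc = tok(text, return_tensors="pt", truncation=True, max_length=max_len)
        out = model(**enc, labels=enc["input_ids"])
        n = enc["input_ids"].numel() - 1   # loss is averaged over n predicted tokens
        total_nll += out.loss.item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)
```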
This set of 16 NLP tasks includes ARC Challenge and ARC Easy [12], HellaSwag [58], OpenBookQA [33], PIQA [7], StoryCloze [36], Winograd [28], Winogrande [42], as well as tasks from SuperGLUE [52]. We refer the reader to Zhang et al. [59] for more information about this evaluation setup.

Instruction Tuning Perplexity. The evaluation metrics mentioned above present an inherent trade-off. Though accuracy on downstream tasks is typically viewed as a more concrete representation of a language model's real-world value, its variance tends to be higher due to the limited number of examples in these tasks and the step-wise behavior of accuracy as a metric. In contrast, perplexity, as a metric, is smoother while still exhibiting a strong correlation with performance [43]. Therefore, as a middle ground between the two evaluation metrics, we propose evaluating the perplexity on a sample drawn from the instruction-tuning dataset used for fine-tuning OPT-IML [21]. This dataset spans over 1500 unique NLP tasks and comprises a wide array of prompt-answer pairs and therefore is representative of the average NLP task. It has been carefully crafted by merging extensive task collections such as Super-NaturalInstructions [53] and PromptSource [3]. We refer the reader to Table 2.1 in [21] for a comprehensive breakdown. This approach allows us to balance practical performance measures and statistical consistency in evaluation. We note that this metric can simply be considered as perplexity on another validation set, where the validation set is filled with examples used for instruction-tuning (we are not fine-tuning on this dataset).

# 3.4 Data Selection Strategies (choices for S)

In our initial exploration of un-curated web data, we embedded a large sample of web documents, clustered these embeddings, and manually inspected the resulting clusters. We quickly identified several high-density clusters with documents that had little to do with the natural distribution of human language and were artifacts of the web crawling: for example, advertisements of Nike shoes that were automatically generated from a single underlying template with minor modifications (see Section A.9 for details).
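A minimal sketch of this exploratory embed-and-cluster loop is below. It assumes the embedding recipe described later in this section (last-layer, last-token hidden state of a 125M OPT model); the number of clusters and the "tightness" heuristic used to surface suspicious clusters are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch: embed documents, k-means the embeddings, and surface the
# tightest (densest) clusters for manual inspection. Model name, k, and the
# density heuristic are assumptions for illustration.
import numpy as np
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

@torch.no_grad()
def embed_documents(docs, model_name="facebook/opt-125m", max_len=2048):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).eval()
    embs = []
    for text in docs:
        enc = tok(text, return_tensors="pt", truncation=True, max_length=max_len)
        hidden = model(**enc).last_hidden_state     # (1, seq_len, dim)
        embs.append(hidden[0, -1].numpy())          # last-token, last-layer embedding
    return np.stack(embs)

def densest_clusters(embs, k=100, top=5):
    km = KMeans(n_clusters=k, n_init="auto").fit(embs)
    dists = np.linalg.norm(embs - km.cluster_centers_[km.labels_], axis=1)
    mean_dist = np.array([dists[km.labels_ == c].mean() for c in range(k)])
    return np.argsort(mean_dist)[:top], km          # tightest clusters first
```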
Motivated by the intuition that these duplicate-driven clusters should be pruned, as well as the recent success of pruning methods in vision and vision-language models [1, 47], we focus our efforts on data selection strategies that manipulate data points based on their position in an embedding space. We embed each document by feeding it into a 125M OPT model and use the last-layer embedding of the last token (we experiment with different embedding spaces in Section A.7). Following this, we experiment with several approaches:
SemDeDup: Abbas et al. [1] proposed de-duplicating in both text and image domains by first using K-Means to cluster the embedding space, and removing points in each cluster that are within epsilon-balls of one another. We use this algorithm without any modifications and refer the reader to Abbas et al. [1] for implementation details of this algorithm.

Prototypicality: Sorscher et al. [47] investigated a large variety of data pruning strategies to improve the data efficiency of training image classification models, including a newly introduced "SSL Prototypes" metric that proved to be one of their best methods. This strategy involves first clustering the embedding space using k-means clustering and discarding data points in increasing order of their distance to the nearest cluster centroid, such that the most "prototypical" data points are discarded, enriching the much higher variance outliers. We refer the reader to Sorscher et al. [47] for a more detailed description of this algorithm.
D4: As mentioned previously, we find many instances of duplicate-driven clusters: clusters of templated text or extremely semantically redundant information that are not removed by MinHash. These regions of embedding space tend to be very dense and cause k-means to waste valuable cluster assignments on duplicated text. This biased clustering could also negatively impact the effectiveness of SSL Prototypes, since many clusters will be entirely driven by duplicates instead of more topical coherence. This insight led us to our proposed strategy:
1. Apply SemDeDup with a selection ratio Rdedup on the entire dataset D, producing a smaller dataset D′
2. Cluster points in D′ with K-Means
3. Apply SSL Prototypes on D′, with a selection ratio Rproto

The above-described strategy has an overall selection ratio of R = Rdedup · Rproto and intends to diversify the distribution of our data locally and globally. For brevity we refer to this method as D4, a shorthand for Document De-Duplication and Diversification; a minimal sketch of the full pipeline is given below. Throughout this work, we choose Rdedup = 0.75 and vary Rproto (we discuss this choice in Section A.1). In Section 4, we compare the performance of D4 to baseline training and other methods, and in Section 4.4 we analyze D4 and show that reclustering after semantic de-duplication indeed reduces the impact of duplicate-driven clusters (see Figure 7).
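The sketch below runs D4 on precomputed document embeddings. The epsilon-ball rule standing in for SemDeDup and the fixed cluster count are simplifying assumptions; see Abbas et al. [1] and Sorscher et al. [47] for the exact procedures, and note that a faithful implementation would tune the de-duplication threshold so that roughly Rdedup of the documents survive step 1.

```python
# Hedged sketch of D4 on precomputed document embeddings (rows of `embs`).
# eps, k, and the keep-one-per-epsilon-ball rule are simplifying assumptions.
import numpy as np
from sklearn.cluster import KMeans

def semdedup(embs, k=100, eps=0.05):
    """Keep at most one document per epsilon-ball inside each K-Means cluster."""
    km = KMeans(n_clusters=k, n_init="auto").fit(embs)
    keep = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        kept_in_cluster = []
        for i in idx:
            if all(np.linalg.norm(embs[i] - embs[j]) > eps for j in kept_in_cluster):
                kept_in_cluster.append(i)
        keep.extend(kept_in_cluster)
    return np.array(keep)

def ssl_prototypes(embs, r_proto, k=100):
    """Keep the least prototypical points: those farthest from their cluster centroid."""
    km = KMeans(n_clusters=k, n_init="auto").fit(embs)
    dist = np.linalg.norm(embs - km.cluster_centers_[km.labels_], axis=1)
    n_keep = int(r_proto * len(embs))
    return np.argsort(-dist)[:n_keep]

def d4(embs, ids, r_proto=0.5):
    kept = semdedup(embs)                        # step 1: SemDeDup-style de-duplication
    kept2 = ssl_prototypes(embs[kept], r_proto)  # steps 2-3: re-cluster, then prototypes
    return ids[kept][kept2]
```

The key design choice is that the clustering in step 2 is recomputed after de-duplication, so the prototypicality ranking in step 3 is not distorted by duplicate-driven clusters.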
# 4 Results

Figure 2: Comparison of data selection methods on validation perplexity. Each point denotes a 1.3B OPT model trained on 40B tokens. The x-axis denotes the selection ratio R. The y-axis for the top two and bottom-left graphs depicts perplexity; the bottom-right graph is average downstream accuracy on 16 NLP tasks from Zhang et al. [59]. The grey line denotes the value for baseline training. Shaded error is standard error across 3 seeds. Each point on this graph is trained on the same token budget: when we decrease R, we jointly increase the size of the source dataset (e.g., choosing 1/4 of documents from a 4x-larger source dataset). [Panels: Web snapshots, Non Web Snapshots, Instructions + Answers perplexity, and 0-shot downstream accuracy; methods: baseline, SemDeDup, SSL Prototypes, D4.]
# 4.1 Fixed compute regime: can data selection help on fixed token budgets?

In this section, we consider the fixed compute setting, where we curate and train on a fixed token budget by jointly increasing the size of the source dataset Dsource and decreasing R (the fraction of the Dsource which is selected), such that the target token budget remains constant. This setting is analogous to the most common paradigm for LLM training. As Dsource grows and R decreases, we select from larger and larger initial datasets, resulting in a larger set of high-quality data points to select from and increasing the overall quality of the selected set. For clarity, we plot performance as a function of the ratio of the Dsource to Dtarget. For each setting, we evaluate the performance of a baseline, SemDeDup alone, SSL Prototypes alone, and our proposed method D4.

Validation Perplexity. In Figure 2, we show that a relatively small amount of data selection using any of the three methods (small R) brings consistent improvements on all validation sets. However, as we increase R, we observe opposing effects on web snapshot and non-web-snapshot validation sets. We analyze this discrepancy in-depth in Section 4.4. However, on the Instruct OPT validation set, which corresponds much more closely to the high-quality generations we want our LLMs to achieve, we found that all three methods led to consistent and clear perplexity improvements. Notably, we found that while all three methods provided benefits, D4 outperformed using both SemDeDup and SSL Prototypes independently, with the most notable gains exhibited when the source dataset is around 4x the target dataset size. Given that D4 consistently improves with source dataset size, we estimate this gap to grow with source dataset size.

Downstream Task Accuracy. In Figure 2, we also report 0-shot downstream accuracy averaged across a suite of NLP tasks. While the high variance of downstream accuracy makes it challenging to identify clear trends in the performance of various models, we again observe that 0-shot downstream accuracy generally increases with source dataset size. Our findings also hold at larger model scales. We pick our best-performing configuration from 1.3B OPT experiments (e.g., R = 0.25) and train 6.7B OPT models on 100B tokens.
Figure 1 shows the positive effects of applying D4 with R = 0.25 for a 6.7B model. The model trained on the pruned data reaches the same perplexity as the baseline model using 20% fewer update steps on average and achieves a 2% improvement in accuracy on our suite of downstream tasks at the end of the training - about as much difference as was reported by Zhang et al. [59] between the OPT and GPT-3 family of models on the same set of tasks (see Figure 3 of Zhang et al. [59]).

# 4.2 Fixed data regime: what happens when we run out of data?
Figure 3: Comparing new tokens vs. repeated tokens for random data selection and D4 for fixed selection ratio R = 0.25 for 1.3B OPT pre-training. Each method chooses 25% of documents from the source dataset Dsource, and epochs over that subset until the target token budget of 40B is reached. We observe that repeating tokens via D4 outperforms baseline training (random, new tokens). [Panels: Non Web Snapshots perplexity, Instruction + Answers perplexity, and 0-shot downstream accuracy vs. number of tokens seen; methods: Random/New Tokens, Random/Repeated Tokens, D4/Repeated Tokens.]

The results in Section 4.1 indicate that, given a fixed amount of compute for training, selecting data from larger and larger source datasets is a promising method to improve language model performance. However, there is a practical limit to how much data can be curated from the web and, therefore, a
natural limit to the size of the source dataset. What happens when we run out of data? Hernandez et al. [19] found and analyzed disproportionately adverse effects of repeated data points in the training data. Similarly, concurrently to our work, Muennighoff et al. [37] shows that test loss deteriorates when epoching over a random subset of C4 more than four times. In this section, we investigate how the use of D4 affects model performance in this limited-data, multi-epoch setting.

To test this, we assume a fixed token budget and a fixed data size which matches the token budget. We evaluate training on all the data as well as for two epochs on subsets of the data selected either randomly or using D4. We trained 1.3B parameter OPT models on these configurations and report average perplexity in Table 1. Unsurprisingly, epoching over a randomly selected subset of the data instead of using all the available data once leads to a slight degradation in model perplexity. In contrast, repeating data selected by D4 leads to an improvement in perplexity and downstream accuracy over randomly sampling new tokens. In other words, it is beneficial to select data via D4 and epoch 2 times, instead of doing one-pass learning on all available data.

| S | Ttotal | Tselected | Epochs | Non-Web Snapshot PPL | Instruction + Answers PPL |
|---|--------|-----------|--------|----------------------|---------------------------|
| Random | 40B | 40B | 1 | 16.27 ± 0.012 | 14.19 ± 0.003 |
| Random | 40B | 20B | 2 | 16.39 ± 0.011 (+0.12) | 14.37 ± 0.015 (+0.18) |
| D4 | 40B | 20B | 2 | 16.10 ± 0.024 (−0.17) | 13.85 ± 0.016 (−0.34) |

Table 1: For fixed data selection method and source dataset size, we compare the effects of choosing new tokens or repeating tokens. All models are 1.3B OPT models trained on 40B tokens. Tselected denotes the number of tokens selected from the source dataset. The top row denotes baseline training. Mean and standard error across 3 seeds are shown. Surprisingly, cleverly choosing tokens to repeat via D4 outperforms randomly selecting new tokens.
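The bookkeeping behind the multi-epoch rows of Table 1 is straightforward; a tiny illustrative sketch (values mirror the 2-epoch rows, and are not a claim about the paper's exact data loader):

```python
# Fixed-data-regime bookkeeping; numbers mirror the 2-epoch rows of Table 1.
t_total = 40e9                      # training token budget
t_selected = 20e9                   # tokens surviving data selection
epochs = t_total / t_selected       # 2 passes over the selected subset fill the budget
print(f"{t_selected / 1e9:.0f}B selected tokens -> {epochs:.0f} epochs")
```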
As seen in Figure 3, this finding generally holds across training as well. We refer to Section A.6 for results across model scale and data selection ratio. To the best of our knowledge, this is the first result to demonstrate the benefits of repeating data for LLM pre-training, over randomly sampling new tokens, via a principled data selection technique. We argue that the optimal way of using large-scale web data to pre-train LLMs could be: strategically choose a significantly smaller but better-distributed subset of the data and epoch over it multiple times.

# 4.3 Cost of data selection

In Section 4.1, we find that by training a 6.7B parameter model on data selected by D4, we reach the final perplexity of a baseline model using 20% fewer model updates. In our particular setup, this translates to saving approximately 4300 GPU hours - we will refer to this as the naive efficiency gain, as it does not account for the cost of computing the selection metric. To demonstrate our method's practicality, we must ensure the cost of selecting data is significantly less than this. As described in Section 3.4, selecting data via D4 involves: first, embedding documents via a 125M OPT model; second, computing K-Means indices + distance to indices. The second step is completed on a single machine with 96 CPU cores in approximately one day. Given the two orders of magnitude difference between the prices of CPU and GPU cores¹, we consider this cost negligible. For the first step, embedding 400B tokens with a 125M parameter model takes approximately 888 GPU hours, using the same A100 GPUs. Subtracting this from the naive efficiency gain of 4300 GPU hours, we arrive at an overall efficiency gain of 3412 GPU hours; the arithmetic is spelled out below. This is how much compute D4 saved us in practice when training our single 6.7B parameter model.
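The overall efficiency gain quoted above is simple GPU-hour bookkeeping; a minimal sketch using only the figures stated in this section:

```python
# Back-of-the-envelope accounting from Section 4.3 (all numbers taken from the text).
naive_gain_gpu_hours = 4300       # 20% fewer 6.7B-model updates over 100B tokens
embedding_cost_gpu_hours = 888    # embedding 400B tokens with a 125M OPT model
kmeans_cost_gpu_hours = 0         # CPU-only step, treated as negligible

overall_gain = naive_gain_gpu_hours - embedding_cost_gpu_hours - kmeans_cost_gpu_hours
print(overall_gain)               # 3412 GPU hours saved for a single 6.7B training run
```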
In Figure 4, we redo this calculation for different model sizes and we see that overall efficiency gain increases with model size. Based on this, we can conservatively estimate that D4 would have overall efficiency gains of 20% for LLaMA-65B [50] and 22% for OPT-175B [59].

Figure 4: Naive and overall efficiency gain of data selection via D4, relative to the total cost of training, as a function of model size on Instruct + Answers perplexity at R = 0.25. [Y-axis: Efficiency Gain (% Compute Saved); x-axis: model size (log scale).]

¹ Source: https://aws.amazon.com/ec2/pricing/on-demand/

# 4.4 Analysis of D4

# 4.4.1 Why does data selection hurt performance on web snapshots?

Figure 5: Left:
Train-test similarity across validation sets. X-axis denotes the name of the validation set (refer to Section 3.4 for more information about each validation set), and y-axis denotes the cosine distance to the nearest neighbor in the training set for the 1.3B OPT 40B baseline (the green triangle denotes mean, and the yellow bar denotes median). We observe that web-snapshot validation sets are closest to points in the training set. Right: Analysis of the C4 validation set. (Top): Histogram of cosine distance to nearest neighbor in train. For each bin, we show the mean original perplexity (middle) and mean difference in perplexity after data selection (bottom). "Easy" (low original ppl) points close to the training set are generally the points most affected by data selection.

While we observe consistent average perplexity improvements, Section A.3 demonstrates that this perplexity improvement varies greatly across validation sets. More importantly, data selection always impairs performance on web-snapshot validation sets such as CC-dedup, CommonCrawl, and C4. To investigate why this occurs, we embed each validation set into the same embedding space as the training set and search for the nearest neighbors to validation points in the training set for our 1.3B baseline model (a sketch of this nearest-neighbor search is given below). In the left plot of Figure 5, we show that validation sets drawn from the same distribution as web snapshots are closer to the training set compared to other validation sets, while the right plot of Figure 5 shows that data selection disproportionately affects these web-snapshot validation sets: on the top-right plot, we see that web validation sets reside in regions of the embedding space which are sparsified as a result of data selection (e.g., regions of space close to cluster centroids in the training set), and in the bottom-right plot we see that these points are also the most affected by data selection, since their perplexity after data selection significantly increases. Moreover, the middle-right plot shows that these validation points have the lowest perplexity before pruning, indicating that these points are "easy" points, perhaps due to their proximity to the training set. Given that some of our validation sets are extremely close to the training set, we question whether they are still strong indicators of generalization.
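A minimal sketch of the nearest-neighbor search used here follows. It computes exact cosine similarity in NumPy; a run over hundreds of millions of training embeddings would realistically need an approximate index (e.g., FAISS), which is an assumption on our part rather than a detail from the paper.

```python
# Hedged sketch: cosine distance from each validation embedding to its nearest
# neighbor among the training embeddings.
import numpy as np

def nn_cosine_distance(val_embs: np.ndarray, train_embs: np.ndarray, batch: int = 1024):
    """Return, for each validation point, 1 - max cosine similarity to the train set."""
    train = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
    val = val_embs / np.linalg.norm(val_embs, axis=1, keepdims=True)
    dists = []
    for start in range(0, len(val), batch):
        sims = val[start:start + batch] @ train.T   # (batch, n_train) cosine similarities
        dists.append(1.0 - sims.max(axis=1))        # nearest-neighbor cosine distance
    return np.concatenate(dists)
```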
In fact, in Figure 6, we find evidence of a slight inverse relationship between perplexity on web snapshots and more robust indicators of LM ability, such as perplexity on instruction-tuned datasets and downstream accuracy. In contrast, we observe that perplexity on Instruct+Answers is positively correlated with downstream accuracy, suggesting that validation perplexity on instruction-tuned data is a better measure of model quality. For this reason, we group most of our results in Section 4 into Web Snapshots and Non-web Snapshots (which consists of Web-Derived + Web-Independent from Figure 5; see Section A.1.4 for a full list of validation set names).