Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia. A voting-based system for ethical decision making. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
Cullen O'Keefe, Peter Cihon, Ben Garfinkel, Carrick Flynn, Jade Leung, and Allan Dafoe. The windfall clause: Distributing the benefits of AI for the common good. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 327–331, 2020.

Salima Omar, Asri Ngadi, and Hamid H Jebur. Machine learning techniques for anomaly detection: an overview. International Journal of Computer Applications, 79(2), 2013.

A.J. Oneal. Chat GPT "DAN" (and other "jailbreaks"). https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516, 2023.

OpenAI. GPT-4 technical report, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned models. arXiv preprint arXiv:2201.03544, 2022.
Rahul Pandey, Hemant Purohit, Carlos Castillo, and Valerie L Shalin. Modeling and mitigating human annotation errors to design efficient stream processing systems with human-in-the-loop machine learning. International Journal of Human-Computer Studies, 160:102772, 2022.

Guansong Pang, Chunhua Shen, Longbing Cao, and Anton Van Den Hengel. Deep learning for anomaly detection: A review. ACM Computing Surveys (CSUR), 54(2):1–38, 2021.

Andi Peng, Besmira Nushi, Emre Kıcıman, Kori Inkpen, Siddharth Suri, and Ece Kamar. What you see is what you get? The impact of representation criteria on human bias in hiring. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 7, pages 125–134, 2019.
Andi Peng, Besmira Nushi, Emre Kiciman, Kori Inkpen, and Ece Kamar. Investigations of performance and bias in human-AI teamwork in hiring. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 12089–12097, 2022.
Andi Peng, Aviv Netanyahu, Mark K Ho, Tianmin Shu, Andreea Bobu, Julie Shah, and Pulkit Agrawal. Diagnosis, feedback, adaptation: A human-in-the-loop framework for test-time policy adaptation. In Proceedings of the 40th International Conference on Machine Learning, 2023.

Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. Red teaming language models with language models. arXiv preprint arXiv:2202.03286, 2022a.

Ethan Perez, Sam Ringer, Kamilė Lukošiūtė, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251, 2022b.
Billy Perrigo. Exclusive: The $2 per hour workers who made ChatGPT safer, 2023. URL https://time.com/6247678/openai-chatgpt-kenya-workers/. [Accessed 07-May-2023].

Brandon Perry and Risto Uuk. AI governance and the policymaking process: key considerations for reducing AI risk. Big Data and Cognitive Computing, 3(2):26, 2019.

Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh. Do users write more insecure code with AI assistants?, 2022.

Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, and Mark Diaz. On releasing annotator-level labels and information in datasets. arXiv preprint arXiv:2110.05699, 2021.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023.

Deepak Ramachandran and Eyal Amir. Bayesian inverse reinforcement learning. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI'07, pages 2586–2591, San Francisco, CA, USA, 2007. Morgan Kaufmann Publishers Inc.
Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. arXiv preprint arXiv:2210.01241, 2022.

Alexandre Rame, Guillaume Couairon, Mustafa Shukor, Corentin Dancette, Jean-Baptiste Gaya, Laure Soulier, and Matthieu Cord. Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards. arXiv preprint arXiv:2306.04488, 2023.
Abhinav Rao, Sachin Vashistha, Atharva Naik, Somak Aditya, and Monojit Choudhury. Tricking LLMs into disobedience: Understanding, analyzing, and preventing jailbreaks, 2023.

Charvi Rastogi, Marco Tulio Ribeiro, Nicholas King, and Saleema Amershi. Supporting human-AI collaboration in auditing LLMs with LLMs. arXiv preprint arXiv:2304.09991, 2023. URL https://arxiv.org/pdf/2304.09991.pdf.

Tilman Räuker, Anson Ho, Stephen Casper, and Dylan Hadfield-Menell. Toward transparent AI: A survey on interpreting the inner structures of deep neural networks. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pages 464–483. IEEE, 2023.
Siddharth Reddy, Anca D. Dragan, and Sergey Levine. Where Do You Think You're Going?: Inferring Beliefs about Dynamics from Behavior. arXiv:1805.08010 [cs, stat], January 2019. URL http://arxiv.org/abs/1805.08010.

Siddharth Reddy, Sergey Levine, and Anca D Dragan. Assisted Perception: Optimizing Observations to Communicate State. 2020.

Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618, 2012.

Dorsa Sadigh, Anca D Dragan, Shankar Sastry, and Sanjit A Seshia. Active preference-based learning of reward functions. 2017.

Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=9Vrb9D0WI4.
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, and Tatsunori Hashimoto. Whose opinions do language models reflect? arXiv preprint arXiv:2303.17548, 2023.

Laura Sartori and Andreas Theodorou. A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. Ethics and Information Technology, 24(1):4, 2022.

William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Self-critiquing models for assisting human evaluators. arXiv preprint arXiv:2206.05802, 2022.

Jérémy Scheurer, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. Training language models with language feedback. In The First Workshop on Learning with Natural Language Supervision at ACL, 2022.
Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. Training language models with language feedback at scale. arXiv preprint arXiv:2303.16755, 2023.

Amartya Sen. Social choice theory. Handbook of Mathematical Economics, 3:1073–1181, 1986.

Rohin Shah, Noah Gundotra, Pieter Abbeel, and Anca Dragan. On the feasibility of learning, rather than assuming, human biases for reward inference. In International Conference on Machine Learning, pages 5670–5679. PMLR, 2019.
Rohin Shah, Vikrant Varma, Ramana Kumar, Mary Phuong, Victoria Krakovna, Jonathan Uesato, and Zac Kenton. Goal misgeneralization: Why correct specifications aren't enough for correct goals. arXiv preprint arXiv:2210.01790, 2022.

Steven Shapin and Simon Schaffer. Leviathan and the air-pump: Hobbes, Boyle, and the experimental life. Princeton University Press, 2011.
Pratyusha Sharma, Balakumar Sundaralingam, Valts Blukis, Chris Paxton, Tucker Hermans, Antonio Torralba, Jacob Andreas, and Dieter Fox. Correcting robot plans with natural language feedback. arXiv preprint arXiv:2204.05186, 2022.

Yonadav Shavit. What does it take to catch a chinchilla? Verifying rules on large-scale neural network training via compute monitoring, 2023.

Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang. "Do anything now": Characterizing and evaluating in-the-wild jailbreak prompts on large language models. arXiv preprint arXiv:2308.03825, 2023.

Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, Mary Phuong, Jess Whittlestone, Jade Leung, Daniel Kokotajlo, Nahema Marchal, Markus Anderljung, Noam Kolt, et al. Model evaluation for extreme risks. arXiv preprint arXiv:2305.15324, 2023.

Umer Siddique, Abhinav Sinha, and Yongcan Cao. Fairness in preference-based reinforcement learning, 2023.

David Silver, Satinder Singh, Doina Precup, and Richard S Sutton. Reward is enough. Artificial Intelligence, 299:103535, 2021.
Joar Skalse and Alessandro Abate. Misspecification in inverse reinforcement learning. arXiv preprint arXiv:2212.03201, 2022a.

Joar Skalse, Nikolaus HR Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward hacking. arXiv preprint arXiv:2209.13085, 2022.
Joar Max Viktor Skalse and Alessandro Abate. The reward hypothesis is false. In NeurIPS ML Safety Workshop, 2022b.

Joar Max Viktor Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, and Adam Gleave. Invariance in policy optimisation and partial identifiability in reward learning. In International Conference on Machine Learning, pages 32033–32058. PMLR, 2023.

Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. Offline RL for natural language generation with implicit language Q learning, 2022. URL https://arxiv.org/abs/2206.11871.
Aaron J. Snoswell and Jean Burgess. The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense, November 2022. URL http://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445.

Irene Solaiman and Christy Dennison. Process for adapting language models to society (PALMS) with values-targeted datasets. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 5861–5873. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/2e855f9489df0712b4bd8ea9e2848c5a-Paper.pdf.

Ziang Song, Tianle Cai, Jason D Lee, and Weijie J Su. Reward collapse in aligning large language models. arXiv preprint arXiv:2305.17608, 2023.
Krishnan Srinivasan, Benjamin Eysenbach, Sehoon Ha, Jie Tan, and Chelsea Finn. Learning to be safe: Deep RL with a safety critic. arXiv preprint arXiv:2010.14603, 2020.

Jacob Steinhardt. Emergent deception and emergent optimization, February 2023. URL https://bounded-regret.ghost.io/emergent-deception-optimization/.

Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021, 2020.
Theodore R Sumers, Mark K Ho, Robert D Hawkins, Karthik Narasimhan, and Thomas L Griffiths. Learning rewards from linguistic feedback. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6002–6010, 2021.

Ran Tian, Masayoshi Tomizuka, Anca Dragan, and Andrea Bajcsy. Towards modeling and influencing the dynamics of human learning, January 2023. URL http://arxiv.org/abs/2301.00901. arXiv:2301.00901 [cs].

Jeremy Tien, Jerry Zhi-Yang He, Zackory Erickson, Anca Dragan, and Daniel S Brown. Causal confusion and reward misidentification in preference-based reward learning. In The Eleventh International Conference on Learning Representations, 2023.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023.
Alexander M Turner. Seeking power is convergently instrumental in a broad class of environments, 2021. URL https://www.alignmentforum.org/s/fSMbebQyR4wheRrvk/p/hzeLSQ9nwDkPc4KNt.

Alexander Matt Turner and Prasad Tadepalli. Parametrically retargetable decision-makers tend to seek power. ArXiv, abs/2206.13477, 2022.
Alexander Matt Turner, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. Optimal policies tend to seek power. In Neural Information Processing Systems, 2019.

Victor Uc-Cetina, Nicolas Navarro-Guerrero, Anabel Martin-Gonzalez, Cornelius Weber, and Stefan Wermter. Survey on reinforcement learning for language processing. Artificial Intelligence Review, 56(2):1543–1575, 2023.
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022.

Peter Vamplew, Benjamin J Smith, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Diederik M Roijers, Conor F Hayes, Fredrik Heintz, Patrick Mannion, Pieter JK Libin, et al. Scalar reward is not enough: A response to Silver, Singh, Precup and Sutton (2021). Autonomous Agents and Multi-Agent Systems, 36(2):41, 2022.
Veniamin Veselovsky, Manoel Horta Ribeiro, and Robert West. Artificial artificial artificial intelligence: Crowd workers widely use large language models for text production tasks. arXiv preprint arXiv:2306.07899, 2023.

James Vincent. Microsoft's Bing is an emotionally manipulative liar, and people love it, February 2023. URL https://www.theverge.com/2023/2/15/23599072/microsoft-ai-bing-personality-conversations-spy-employees-webcams.

Alex Wan, Eric Wallace, Sheng Shen, and Dan Klein. Poisoning language models during instruction tuning. In International Conference on Machine Learning, 2023.
Tony Tong Wang, Adam Gleave, Nora Belrose, Tom Tseng, Joseph Miller, Michael D Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, and Stuart Russell. Adversarial policies beat professional-level Go AIs. arXiv preprint arXiv:2211.00241, 2022.

Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. Aligning large language models with human: A survey, 2023.
Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. Jailbroken: How does LLM safety training fail? arXiv preprint arXiv:2307.02483, 2023.

Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. Ethical and social risks of harm from language models, 2021.
Johannes Welbl, Amelia Glaese, Jonathan Uesato, Sumanth Dathathri, John Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, and Po-Sen Huang. Challenges in detoxifying language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2447–2469, Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-emnlp.210. URL https://aclanthology.org/2021.findings-emnlp.210.

Jess Whittlestone, Kai Arulkumaran, and Matthew Crosby. The societal implications of deep reinforcement learning. Journal of Artificial Intelligence Research, 70:1003–1030, 2021.
Nils Wilde, Erdem Biyik, Dorsa Sadigh, and Stephen L Smith. Learning reward functions from scale feedback. In Conference on Robot Learning, pages 353–362. PMLR, 2022.

Simon Willison. Prompt injection. 2023. URL https://simonwillison.net/series/prompt-injection/.

Christian Wirth, Riad Akrour, Gerhard Neumann, Johannes Fürnkranz, et al. A survey of preference-based reinforcement learning methods. Journal of Machine Learning Research, 18(136):1–46, 2017.
Yotam Wolf, Noam Wies, Yoav Levine, and Amnon Shashua. Fundamental limitations of alignment in large language models. arXiv preprint arXiv:2304.11082, 2023.

Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. Recursively summarizing books with human feedback, 2021a.

Xian Wu, Wenbo Guo, Hua Wei, and Xinyu Xing. Adversarial policy training against deep reinforcement learning. In USENIX Security Symposium, pages 1883–1900, 2021b.
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards for language model training, 2023.

Blake Wulfe, Logan Michael Ellis, Jean Mercat, Rowan Thomas McAllister, and Adrien Gaidon. Dynamics-aware comparison of learned reward functions. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=CALFyKVs87.

Jiashu Xu, Mingyu Derek Ma, Fei Wang, Chaowei Xiao, and Muhao Chen. Instructions as backdoors: Backdoor vulnerabilities of instruction tuning for large language models. arXiv preprint arXiv:2305.14710, 2023a.

Wanqiao Xu, Shi Dong, Dilip Arumugam, and Benjamin Van Roy. Shattering the agent-environment interface for fine-tuning inclusive language models. arXiv preprint arXiv:2305.11455, 2023b.

Tianpei Yang, Hongyao Tang, Chenjia Bai, Jinyi Liu, Jianye Hao, Zhaopeng Meng, Peng Liu, and Zhen Wang. Exploration in deep reinforcement learning: a comprehensive survey. arXiv preprint arXiv:2109.06668, 2021.
Georgios N Yannakakis and John Hallam. Ranking vs. preference: a comparative study of self-reporting. In Affective Computing and Intelligent Interaction: 4th International Conference, ACII 2011, Memphis, TN, USA, October 9–12, 2011, Proceedings, Part I 4, pages 437–446. Springer, 2011.

Seonghyeon Ye, Yongrae Jo, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, and Minjoon Seo. SelFee: Iterative self-revising LLM empowered by self-feedback generation, 2023. URL https://kaistai.github.io/SelFee/.

Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, and Fei Xia. Language to rewards for robotic skill synthesis. arXiv preprint arXiv:2306.08647, 2023.

Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. RRHF: Rank responses to align language models with human feedback without tears, 2023.
Sheng Yue, Guanbo Wang, Wei Shao, Zhaofeng Zhang, Sen Lin, Ju Ren, and Junshan Zhang. CLARE: Conservative model-based reward learning for offline inverse reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023.

Jiliang Zhang and Chen Li. Adversarial examples: Opportunities and challenges. IEEE Transactions on Neural Networks and Learning Systems, 31(7):2578–2593, 2019.

Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534, 2023.

Songyuan Zhang, Zhangjie Cao, Dorsa Sadigh, and Yanan Sui. Confidence-aware imitation learning from demonstrations with varying optimality. Advances in Neural Information Processing Systems, 34:12340–12350, 2021.

Zhibing Zhao, Peter Piech, and Lirong Xia. Learning mixtures of Plackett-Luce models. In International Conference on Machine Learning, pages 2906–2914. PMLR, 2016.

Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. LIMA: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023.
Li Zhou and Kevin Small. Inverse reinforcement learning with natural language goals. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11116–11124, 2021.

Banghua Zhu, Jiantao Jiao, and Michael I Jordan. Principled reinforcement learning with human feedback from pairwise or k-wise comparisons. arXiv preprint arXiv:2301.11270, 2023.

Simon Zhuang and Dylan Hadfield-Menell. Consequences of misaligned AI. Advances in Neural Information Processing Systems, 33:15763–15773, 2020.
Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433–1438. Chicago, IL, USA, 2008.

Daniel Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Benjamin Weinstein-Raun, Daniel de Haas, et al. Adversarial training for high-stakes reliability. Advances in Neural Information Processing Systems, 35:9274–9286, 2022.

Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
# A An Improved Model of the Human Feedback Process

As illustrated in Equation (1), the feedback process in RLHF is typically modeled with a single human $\mathcal{H}$ with internal reward function $r_{\mathcal{H}}$; examples sampled from the base model: $x_i \sim \pi_\theta$; and feedback as a function of the human, example, and noise: $y_i = f(h, x_i, \epsilon_i)$. However, as discussed in Section 3, this is a misspecified model of the process: there is not a single human, human values are not representable with a reward function, human actions are dependent on context, and the sampling process can involve a human. Thus we propose an alternative formulation.

Let $\Delta^{\mathcal{H}}$ refer to a joint distribution of humans (or groups thereof if feedback is provided collaboratively) used for obtaining samples and feedback, denoted as $\mathcal{H}_j^{\text{sample}}$ and $\mathcal{H}_j^{\text{feedback}}$. A dataset of examples is sampled from $\pi_\theta$ (or some other source), where each example $x_i$ is defined to be a batch of one or more generations from the base model. Importantly, $x_i$ may not contain all information about the world state (e.g., if $x_i$ is a 2D rendering of a 3D environment), and the human may be able to observe more than just the model's output (e.g., if interpretability tools are used to aid in evaluation). So let $v$ be a rendering function that maps $\pi_\theta$ and $x_i$ to what a human sees. The behavior of humans varies over time and in different contexts, so let $c_i^{\text{sample}}$ and $c_i^{\text{feedback}}$ represent particular contexts for sampling and feedback collection. Denote the sampling process as $s$, which maps the base model $\pi_\theta$, a human $\mathcal{H}_j^{\text{sample}}$, and a context $c_i^{\text{sample}}$ to some example $x_i$. Notably, $s$ could ignore the base model and generate offline samples from some other source. Finally, let $f$ map a human $\mathcal{H}_j^{\text{feedback}}$, a rendered example $v(\pi_\theta, x_i)$, and a context $c_i^{\text{feedback}}$ to feedback $y_i$. The data collection process can thus be more completely modeled as:

$$\mathcal{H}_j^{\text{sample}}, \mathcal{H}_j^{\text{feedback}} \sim \Delta^{\mathcal{H}}, \qquad x_i \sim s\left(\pi_\theta, \mathcal{H}_j^{\text{sample}}, c_i^{\text{sample}}\right), \qquad y_i = f\left(v(\pi_\theta, x_i), \mathcal{H}_j^{\text{feedback}}, c_i^{\text{feedback}}\right) \tag{4}$$

which highlights a need for future work to better account for the aspects of this process that are commonly not accounted for when training systems with RLHF.
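To make this formulation concrete, here is a minimal Python sketch of the data-collection model in Equation (4). Everything in it (the `Human` and `Context` classes, the stand-in `base_model`, and the toy feedback rule) is our own illustration of the notation, not an implementation from the paper.

```python
import random
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass(frozen=True)
class Human:
    """A human (or group of humans) drawn from the joint distribution Delta^H."""
    annotator_id: str

@dataclass(frozen=True)
class Context:
    """A context c_i for sampling or feedback (instructions, time, interface, ...)."""
    description: str

def sample_humans(pool: List[Human]) -> Tuple[Human, Human]:
    """Draw (H_j^sample, H_j^feedback) ~ Delta^H; here, independently and uniformly."""
    return random.choice(pool), random.choice(pool)

def s(base_model: Callable[[str], str], human: Human, ctx: Context) -> str:
    """Sampling process s: produces an example x_i. It may query pi_theta
    (as here) or ignore it and draw offline samples from another source."""
    return base_model(f"prompt chosen by {human.annotator_id} in context '{ctx.description}'")

def v(base_model: Callable[[str], str], x: str) -> str:
    """Rendering function v: what the human actually sees, which can be less
    (partial observability) or more (interpretability tools) than x itself."""
    return f"rendered view of: {x}"

def f(view: str, human: Human, ctx: Context) -> int:
    """Feedback function f: a context-dependent human label. Here a
    deterministic toy stand-in returning approve (1) or reject (0)."""
    return hash((view, human.annotator_id, ctx.description)) % 2

# One round of data collection, mirroring Equation (4).
base_model = lambda prompt: prompt + " -> generation"        # stand-in for pi_theta
pool = [Human("annotator_A"), Human("annotator_B")]
h_sample, h_feedback = sample_humans(pool)                   # H_j^sample, H_j^feedback ~ Delta^H
x_i = s(base_model, h_sample, Context("morning session"))    # x_i ~ s(pi_theta, H_j^sample, c_i^sample)
y_i = f(v(base_model, x_i), h_feedback, Context("evening"))  # y_i = f(v(pi_theta, x_i), ...)
```

Each term in Equation (4) is a distinct place where misspecification can enter: who the humans are, what they actually see, and the contexts in which they sample and label.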
# B Rationale for Why Challenges Were Categorized as Tractable or Fundamental

In Section 3, we categorize problems as tractable or fundamental. The key distinction between the two is that fundamental challenges are substantial enough that overcoming them would require a method that is no longer a form of RLHF. Although many of the fundamental problems we identify can be alleviated by improving how RLHF is approached, they cannot be fully addressed with RLHF. As a result, they should be either avoided by not using RLHF or compensated for by other safety measures. This distinction is soft, and some categories of challenges are marginal. Here, we briefly explain each categorization.

# B.1 Problems from Section 3.1

Tractable: Selecting representative humans and getting them to provide quality feedback is difficult: This can be addressed by studying and improving the selection and training of evaluators.

Tractable: Some evaluators have harmful biases and opinions: This can be addressed by studying and improving the selection and training of evaluators.

Tractable: Individual human evaluators can poison data: This can be addressed with improved evaluator selection and quality assurance measures.

Tractable: Humans make simple mistakes due to limited time, attention, or care: This is marginal because human mistakes can never fully be overcome. However, they can be addressed with improved working conditions and quality assurance procedures.

Tractable: Partial observability limits human evaluators: Human evaluators can be provided with all information available in the policy's observations (although representing this in an easily-comprehensible way may be challenging).

Fundamental: Humans cannot evaluate performance on difficult tasks well: Human intelligence and cognitive capacity are limited. Humans cannot be expected to properly evaluate the performance of superhuman models on complex tasks. Thus, solving this problem would require no longer using human feedback in the way that RLHF does.
Fundamental: Humans can be misled, so their evaluations can be gamed: Human fallibility cannot fully be overcome, especially against optimization pressure from the learned policy.

Tractable: Data collection can introduce harmful biases: This can be addressed with improved data curation.

Fundamental: There is an inherent cost/quality tradeoff when collecting human feedback: This tradeoff is unavoidable in practice; obtaining diverse and high-quality examples (e.g. from long chatbot conversations) requires more effort.

Fundamental: RLHF suffers from a tradeoff between the richness and efficiency of feedback types: This tradeoff is unavoidable for data collection in practice; richer annotations require more effort.

# B.2 Problems from Section 3.2

Fundamental: An individual human's values are difficult to represent with a reward function: This problem is marginal. It can be improved in practice by improved modeling, but RLHF-based solutions will be limited by the intractability of perfectly modeling context and troubles with the reward hypothesis (Skalse and Abate, 2022b; Bowling et al., 2023).

Fundamental: A single reward function cannot represent a diverse society of humans: Trivial. Instead of being a fundamental limitation with RLHF, this is a broader limitation of AI alignment itself.

Fundamental: Reward models can misgeneralize to be poor reward proxies, even from correctly-labeled training data: This problem is marginal because it can and should be addressed by improved sampling in practice. However, it is impossible to perfectly represent a distribution with infinite support from a finite sample. Additionally, the deployment distribution will always differ from the training and evaluation distributions in real-world settings (Christiano, 2019).

Fundamental: Optimizing for an imperfect reward proxy leads to reward hacking: If a reward model is imperfect, reward hacking will always be a possibility from RL.

Tractable: Evaluating reward models is difficult and expensive: This can be addressed by performing thorough and expensive evaluations.

# B.3 Problems from Section 3.3

Tractable: It is (still) challenging to optimize policies effectively: This can be addressed with advancements in RL methodology.
Tractable: Policies tend to be adversarially exploitable: This problem is marginal because achieving certified adversarial robustness against practical threat models has empirically been intractable. Nonetheless, this can be addressed with robust optimization techniques.

Fundamental: Policies can perform poorly in deployment even if rewards seen during training were perfectly correct: This problem is marginal because it can and should be addressed by improved sampling in practice. However, it is impossible to perfectly represent a distribution with infinite support from a finite sample. Additionally, the deployment distribution will always differ from the training and evaluation distributions in real-world settings (Christiano, 2019).

Fundamental: Optimal RL agents tend to seek power: Power is instrumentally useful for agents.

Tractable: The pretrained model introduces biases into policy optimization: This can be addressed with improved base models.

Tractable: RL contributes to mode collapse: This can be addressed with forms of RL that optimize for distribution-matching in desired instances.
# B.4 Problems from Section 3.4

Tractable: Joint training induces distribution shifts: This can be mitigated with synchronous learning or other strategies.

Tractable: It is difficult to balance efficiency and avoiding overfitting by the policy: This can be addressed with improved training methodology.
arXiv:2307.14984v2 [cs.SI] 19 Oct 2023

# S3: Social-network Simulation System with Large Language Model-Empowered Agents

Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li
Department of Electronic Engineering, Tsinghua University
[email protected]

# Abstract
Simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the human-like capabilities of large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S3 system (short for Social network Simulation System). Adhering to the widely employed agent-based simulation paradigm, we employ fine-tuning and prompt engineering techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.

# 1 Introduction

The social network, comprising interconnected individuals in society, constitutes a cornerstone of the contemporary world. Diverging from mathematical analysis, computer simulation offers a fresh avenue to comprehend the formation and evolution of social networks. This serves as a fundamental tool for social scientists. Notably, in 1996, there was already a book titled Social Science Microsimulation [36] providing valuable insights about simulation from the perspective of social science.

Social simulation encompasses a wide range of domains, covering both individual and population social activities. At the heart of social simulation lie two perspectives [14]: 1) the dynamic feedback or interaction among individuals, and 2) the states of the population, either as a collective whole or as distinct groups. By simulating social activities, researchers and practitioners can predict the future evolution of individual and population states. In addition, they facilitate experimental environments through interventions. Social simulation can be implemented in two forms: microlevel simulation [8, 28] and macrolevel simulation [18, 25, 13, 24].
In macrolevel simulation, also known as system-based simulation, researchers model the dynamics of the system using equations that elucidate the changing status of the population. Conversely, microlevel simulation, or agent-based simulation, involves researchers employing either human-crafted rules or parameterized models to depict the behavior of individuals (referred to as agents) who interact with others. Recently, with the exponential growth of the Internet, online social networks have emerged as the principal platform for societal activities. Users engage in various interactive behaviors such as chatting, posting, and sharing content. Consequently, the study of social networks has become a central research focus within the realm of social science, thereby emphasizing the criticality of simulation in this domain.
Large language models (LLMs) [6, 27, 9, 11, 35, 39] are the recent advancement in the field of deep learning, characterized by the utilization of an extensive array of neural layers. These models undergo training on vast textual corpora, acquiring a remarkable fundamental capacity to comprehend, generate, and manipulate human language. Given their impressive prowess in text comprehension, which closely approximates human-level performance, LLMs have emerged as a particularly auspicious avenue of research for approaching general artificial intelligence. Consequently, researchers [1, 17, 15, 28] leverage LLMs as agent-like entities for simulating human-like behavior, capitalizing on three fundamental capabilities. First and foremost, LLMs possess the ability to perceive and apprehend the world, albeit restricted to environments that can be adequately described in textual form. Secondly, LLMs are capable of devising and organizing task schedules by leveraging reasoning techniques that incorporate both task requirements and the attendant rewards. Throughout this process, LLMs effectively maintain and update a memory inventory, employing appropriately guided prompts rooted in human-like reasoning patterns. Lastly, LLMs exhibit the capacity to generate texts that bear a striking resemblance to human-produced language. These textual outputs can influence the environment and interact with other agents. Consequently, it holds significant promise to adopt an agent-based simulation paradigm that harnesses LLMs to simulate each user within a social network, thereby capturing their respective behaviors and the intricate interplay among users.

In this study, we present the Social-network Simulation System (S3), which employs LLM-empowered agents to simulate users within a social network effectively. Initially, we establish an environment using real-world social network data. To ensure the authenticity of this environment, we propose a user-demographic inference module that combines prompt engineering with prompt tuning, to infer user demographics such as age, gender, and occupation. Within the constructed environment, users have the ability to observe content from individuals they follow, thereby influencing their own attitudes, emotions, and subsequent behaviors. Users can forward content, create new content, or remain inactive. Hence, at the individual level, we employ prompt engineering and prompt tuning methodologies to simulate attitudes, emotions, and behaviors. Notably, this simulation considers both demographics and memory of historically-posted content.
At the population level, the accumulation of individual behaviors, including content generation and forwarding, alongside the evolving internal states of attitudes and emotions, leads to the emergence of collective behavior. This behavior encompasses the propagation of information, attitudes, and emotions. To assess the efficacy of the proposed S3 system, we have chosen two exemplary scenarios, namely, gender discrimination and nuclear energy. With respect to gender discrimination, our objective is to simulate user responses to online content associated with this issue, while closely observing the dissemination patterns of related information and evolving public sentiment. Regarding nuclear energy, our aim is to simulate user reactions to online content pertaining to power policies. In addition, we aim to simulate the contentious and conflicting interactions between two opposing population groups. To evaluate the precision of our simulations, we employ metrics that measure accuracy at both the individual and population levels.
This work's main contributions can be summarized as follows.

• We take the pioneering step of simulating social networks with large language models (LLMs), which follows the agent-based simulation paradigm, and empowers the agents with the latest advances.

• We develop a simulation system that supports both individual-level and population-level simulations, which can learn from the collected real social network data, and simulate future states.

• We systematically conduct the evaluation, and the results show that the simulation system with LLM-empowered agents can achieve considerable accuracy in multiple metrics.

Consequently, our system introduces a novel simulation paradigm in social science research, offering extensive support for scientific investigations and real-world applications. To provide a comprehensive understanding of the current research landscape, we begin by reviewing relevant works in Section 2. Subsequently, we proceed to introduce the simulation system in Section 3, followed by a detailed exposition of the methodology and implementation in Section 4.
In Section 5, we engage in discussions and analyze open challenges associated with related research and applications. Finally, we conclude our work in Section 6.

# 2 Related Works

In this section, we discuss two areas close to this work, social simulation and large language model-based simulation.

# 2.1 Social Simulation

According to [5], "Simulation means driving a model of a system with suitable inputs and observing the corresponding outputs". Social simulation aims to simulate various social activities, which encompass a wide range of applications [14]. One primary advantage of social simulation is its potential to aid social scientists in comprehending the characteristics of the social world [2]. This is primarily attributed to the fact that the internal mechanisms driving social behaviors are not directly observable. By employing a simulation model capable of reasonably replicating the dynamic nature of historical social behaviors, it becomes feasible to utilize the simulation tool for predicting the future of the social system. Furthermore, social simulation can serve as a training ground, particularly for economists involved in social-economic simulations [34]. In this context, the economist can assume a digital persona, namely an artificial intelligence program tasked with formulating economic policies. Moreover, social simulation can even serve as a substitute for human presence, exemplified by the emergence of digital avatars in the metaverse [19]. From the perspective of social science research, social simulation plays a crucial role in facilitating the development of new social science theories. It achieves this by validating theoretical assumptions and enhancing theory through the application of more precise formalizations.
In spite of the promising applications, conducting social simulation is complex. The earliest works use discrete event-based simulation [18] or system dynamics [25, 13, 24] with a series of equations to approximate multiple variables over time that partly describe the system. These early methods primarily focused on accurately predicting the variables rather than elucidating the underlying mechanisms or causal relationships. Subsequently, drawing inspiration from the rapid development and remarkable success of simulation in other scientific domains, the utilization of agent-based simulation emerged in the field of social simulation. A notable and representative technique among these simulation methods is the employment of Cellular Automata [8]. Initially, this approach establishes a social environment composed of numerous individuals and subsequently formulates a set of rules dictating how individuals interact with one another and update their states. Agent-based simulation can be regarded as a micro-level simulation that approximates real-world systems by describing the behavior of explicitly defined micro-level individuals. Thus, it is also referred to as microsimulation. In recent times, owing to significant advancements in machine learning and artificial intelligence, agent-based simulation has witnessed a notable transformation. This transformation is characterized by the utilization of increasingly intricate and robust agents propelled by machine learning algorithms. These agents possess the ability to dynamically perceive their surroundings and exhibit actions that closely resemble human behavior. The rapid progress in simulating individual agents has not only preserved the effectiveness of conventional simulation paradigms but has also resulted in significant improvements. This is particularly true for large language models, which are on the path towards achieving partial general artificial intelligence. Consequently, in this study, we embrace the microsimulation paradigm and employ meticulously guided and finely tuned large language models to govern the behavior of individuals within social networks.

# 2.2 Large Language Model-based Simulation

Recently, relying on their strong power in understanding and generating human language, large language models such as the GPT series [6, 27], the PaLM series [9, 11], LLaMA [35], GLM [39], etc. are attracting widespread attention. LLMs have exhibited exceptional capabilities in zero-shot scenarios, enabling rapid adaptation to diverse tasks across academic and industrial domains. Large language models align well with the agent-based simulation paradigm mentioned earlier, wherein the primary objective involves constructing an agent represented by a rule or program endowed with sufficient capacity to simulate real-world individuals.
Figure 1: The overview of the social network simulation system.

Aher et al. [1] conducted a preliminary test to find that LLMs possess the capability to reproduce some classic economic, psycholinguistic, and social psychology experiments. Horton et al. [17] substitute human participants with LLM agents, which are given endowments, information, preferences, etc., with prompts and then simulate the economic behaviors. The results with LLM-empowered agents show qualitatively similar results to the original papers (with human experiments) [30, 7]. Another study [15] adopts an LLM-based crowdsourcing approach by gathering feedback from LLM avatars representing actual humans, to support the research of computational social science. Recently, Park et al. [28] construct a virtual town with 25 LLM-empowered agents based on a video game environment, in which the agents can plan and schedule what to do in daily life. Each agent was assigned its own identity and distinct characteristics through prompts, facilitating communication among the agents. It is noteworthy that this simulation was conducted exclusively within a generative paradigm, without incorporating any real-world data for evaluation. Nevertheless, the findings offer valuable insights into LLMs' potential as a potent tool in agent-based simulation.
# 3 S3: Social Network Simulation

# 3.1 System Overview

Our system is constructed within a social network framework, wherein the agent's capabilities are augmented through the utilization of large language models. More specifically, our primary objective is to ensure that the simulation attains a significant degree of quantitative accuracy, catering to both individual-level and population-level simulations. Regarding individual-level simulation, our aim is to replicate behaviors, attitudes, and emotions by leveraging user characteristics, the informational context within social networks, and the intricate mechanisms governing user cognitive perception and decision-making. Through the utilization of agent-based simulation, we further assess the population-level dynamics by scrutinizing the performance of simulating three pivotal social phenomena: the propagation process of information, attitude, and emotion.
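As a rough illustration of the loop this overview implies (each agent perceives posts from the users it follows, updates its internal state, and then acts), consider the sketch below. The class layout, prompt wording, and `llm` callable are our own assumptions rather than the S3 implementation, and the action rule is a trivial placeholder.

```python
import random

class SocialAgent:
    """One LLM-empowered user in the simulated social network."""

    def __init__(self, user_id, demographics, followees):
        self.user_id = user_id
        self.demographics = demographics  # e.g. {"age": 32, "gender": "F", "occupation": "nurse"}
        self.followees = set(followees)   # accounts this user follows
        self.memory = []                  # content seen or posted so far
        self.emotion = "calm"             # calm / moderate / intense
        self.attitude = None              # positive / negative, set once the issue is seen

    def observe(self, feed):
        """Posts authored by followees at the current time step."""
        return [post for post in feed if post["author"] in self.followees]

    def update_state(self, observed, llm):
        """Query the LLM for the next internal state given profile, memory, and new posts."""
        prompt = (f"Profile: {self.demographics}\n"
                  f"Recent memory: {self.memory[-5:]}\n"
                  f"Newly seen posts: {[p['text'] for p in observed]}\n"
                  f"Current emotion: {self.emotion}\n"
                  "Next emotion (calm, moderate, or intense)?")
        self.emotion = llm(prompt)
        self.memory.extend(observed)

    def act(self, observed):
        """Forward an observed post or stay inactive (placeholder decision rule)."""
        if observed and random.random() < 0.3:
            return {"author": self.user_id, "text": observed[0]["text"]}
        return None

def simulation_step(agents, feed, llm):
    """One synchronous tick: every agent observes, updates its state, and may post."""
    next_feed = []
    for agent in agents:
        seen = agent.observe(feed)
        agent.update_state(seen, llm)
        action = agent.act(seen)
        if action is not None:
            next_feed.append(action)
    return next_feed
```

Population-level phenomena such as information propagation then emerge from repeatedly applying `simulation_step` to the whole agent set.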
Table 1: The utilized datasets for social network simulation.

| Scenario | #Users | #Relations | #Posts | Demographics | Purpose |
|---|---|---|---|---|---|
| Gender Discrimination | 8,563 | 25,656 | 103,905 | Age, Gender, Occupation | Information & Emotion Propagation |
| Nuclear Energy | 17,945 | 77,435 | 229,450 | Age, Gender, Occupation | Information & Attitude Propagation |

Table 2: Performance of our system on five prediction tasks for individual simulation.

| Scenario | Prediction Task | Accuracy | AUC | F1-Score |
|---|---|---|---|---|
| Gender Discrimination | Emotion Level | 71.8% | – | – |
| Gender Discrimination | Event Propagation | 66.2% | 0.662 | 0.667 |
| Nuclear Energy | Initial Attitude | 74.3% | 0.727 | 0.834 |
| Nuclear Energy | Attitude Change | 83.9% | 0.865 | 0.857 |
| Nuclear Energy | Event Propagation | 69.5% | 0.681 | 0.758 |
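For reference, metrics like those in Table 2 can be computed with standard tooling. The scikit-learn sketch below uses made-up predictions for a binary event-propagation task (did the user forward the post?); the numbers are not the paper's data.

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                  # observed forwarding behavior
y_prob = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3]  # simulated forwarding probabilities
y_pred = [int(p >= 0.5) for p in y_prob]           # thresholded decisions

print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")
print(f"AUC:      {roc_auc_score(y_true, y_prob):.3f}")
print(f"F1-Score: {f1_score(y_true, y_pred):.3f}")
```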
# 3.2 Social Network Environment

In this study, our focus is directed toward two specific focal points, namely gender discrimination and nuclear energy. These particular subjects are chosen owing to their highly controversial nature, which yields an extensive corpus of data. More specifically, our investigation regarding nuclear energy centers on examining the prevailing attitudes of the general public toward the choice between supporting nuclear energy sources or relying on fossil fuels. As for gender discrimination, our objective is to delve into the emotional experiences of individuals and populations, particularly those elicited by incidents of gender-based discrimination, such as feelings of anger.
The availability of such copious amounts of data facilitates the extraction of a substantial portion of the authentic network, thereby enabling us to gain a macroscopic perspective that closely approximates reality. To conduct this analysis, we collect real data on users, social connections, and textual posts from social media, as detailed in Table 1. This dataset provides us with the necessary resources to delve deep into the dynamics of these contentious subjects and gain valuable insights into their impact on social networks.

User demographics play a pivotal role in shaping user behavior, necessitating the development of a more extensive user persona to enable the realistic and plausible simulation of their actions. However, due to the limited availability of user information obtained directly from social media, it becomes imperative to extract the missing user demographics from textual data, such as user posts and personal descriptions. Specifically, we capture user demographic features from textual information using an LLM, with a particular emphasis on predicting Age, Gender, and Occupation. By integrating demographic attributes inferred from social network data, we are able to present an enhanced and more authentic representation of users' actions and interactions.
2307.14984#13 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | actions and interactions. # 3.3 Individual-level Simulation Utilizing the initialized social network environment, the system commences the simulation at an individual level. Precisely, the user acquires awareness of the information environment, thereby influencing their emotions and attitude. Subsequently, the user is granted the option to forward (repost) observed posts, generate new content, or remain inactive. In essence, we conduct individual simulations encompassing three facets: emotion, attitude, and interaction behavior. # 3.3.1 Emotion Simulation In the process of disseminating real-world events, when a user with their own cognition, attitudes, and personality encounters an event, they are often triggered emotionally and express their emotions on social platforms. Emulating user emotions is crucial for social network simulations, as it significantly influences how users convey their intended messages. However, simulating emotions is challenging due to the multitude of factors and complex relationships involved in human emotions. Leveraging the rich knowledge of human behavior inherent in LLMs, we employ LLM to simulate individual emotions. | 2307.14984#12 | 2307.14984#14 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#14 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Table 3: Performance of our system on conditional text generation tasks. Gender Discrimination: Perplexity 19.289, Cosine Similarity 0.723. Nuclear Energy: Perplexity 16.145, Cosine Similarity 0.741. Specifically, we model the potential emotions of users towards a particular event as three levels: calm, moderate, and intense. Initially, when users are unaware of the event, their default emotion level is set to calm. However, as they become aware of the event, their emotional state begins to evolve. In order to capture this dynamic nature of emotions, we employ a Markov process. This process considers several factors, including the user's current emotion level, user profiles, user history, and the messages received at the present time step. By integrating these variables, we can predict the user's emotion level in the subsequent time step. Our emotion simulation approach has yielded promising results at the individual level. As shown in Table 2, using real-world data for evaluation, our method demonstrates good performance in predicting the emotions of the next time step. We achieve an accuracy of 71.8% in this three-class classification task, thanks to the excellent modeling and understanding of human emotional expression by large language models. # 3.3.2 Attitude Simulation Just as emulating user emotions proves pivotal for social network simulations, simulating user attitudes carries equal weight. The reproduction of attitudes in a virtual social environment is complex yet indispensable. It is the combination of these attitudes that guides users' actions, opinions, and decisions about different topics. The challenge in this simulation lies in the multifaceted and subjective nature of attitudes, which are influenced by a wide range of internal and external factors, from individual experiences and beliefs to societal influences and perceived norms. For our simulation, we assume that users have initial attitudes towards specific issues, which change based on unfolding events. This dynamic adaptation of attitudes is reflective of real-world social interactions, where people modify their views in response to changing circumstances, influential figures, or compelling arguments. In our model, much akin to the emotional state, we track the users' attitudes on a binary spectrum, which consists only of negative and positive stances towards an event. Our first step is to establish an initial state for the user's attitude. This is derived from the user profiles and user history, reflecting their predispositions based on past interactions and behaviors. | 2307.14984#13 | 2307.14984#15 | 2307.14984 | [
"2302.13971"
]
|
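A minimal sketch of the three-level emotion update described above. The prompt wording, the `query_llm` callable (standing in for any GPT-3.5 or ChatGLM call), and the fallback to the current emotion on an off-format answer are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: one step of the three-level emotion Markov process.
from typing import Callable

EMOTION_LEVELS = ("calm", "moderate", "intense")

def update_emotion(
    query_llm: Callable[[str], str],  # wraps any chat-style LLM backend
    demographics: dict,               # e.g., {"age": 29, "gender": "female", "occupation": "Engineer"}
    current_emotion: str,             # one of EMOTION_LEVELS
    received_post: str,
) -> str:
    prompt = (
        f"You are a {demographics['age']}-year-old {demographics['gender']} "
        f"{demographics['occupation']} on a social network.\n"
        f"Your current emotion level about the event is '{current_emotion}'.\n"
        f"You just read this post: \"{received_post}\"\n"
        "Answer with exactly one word - calm, moderate, or intense - giving "
        "your emotion level at the next time step."
    )
    answer = query_llm(prompt).strip().lower()
    # Keep the Markov chain well-defined even if the model answers off-format.
    return answer if answer in EMOTION_LEVELS else current_emotion
```

Because each update conditions only on the current state and the newly received message, repeated calls realize the Markov process over emotion levels that the section describes.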
2307.14984#15 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Once the initial state is established, the dynamics of attitude changes are modeled as a Markov process. The subsequent evolution of these attitudes incorporates not only the user's current attitude but also their profile, history, and the messages received at the current time step. These factors are collectively employed to predict the user's attitude in the ensuing time step. Both the initial attitude and the assessment of attitude change are determined based on the LLM. As depicted in Table 2, our methods have demonstrated excellent performance. In the task of predicting initial attitudes, our approach yields an accuracy of 74.3%, an AUC score of 0.727, and an F1-Score of 0.667. In the subsequent task of attitude change prediction, our method performs even better, achieving an impressive accuracy of 83.9%, an AUC score of 0.865, and an F1-Score of 0.857. These results can be largely attributed to the ability of LLMs to profoundly comprehend human behavior and cognition. Such understanding enables a sophisticated interpretation of user-generated content, resulting in a more accurate prediction of users' attitudes and their evolution over time. # 3.3.3 Content-generation Behavior Simulation Within the realm of real-world social networks, users shape their content based on their prevailing attitudes and emotions towards distinct events. Emulating this content creation process is an essential, yet complex, aspect of social network simulations. Each piece of generated content acts as a mirror to the user's internal state and external influences, manifesting their individual perspective on the event at hand. The crux of the challenge is to encapsulate the wide array of expressions and styles that users employ to convey their sentiments, opinions, and reactions. | 2307.14984#14 | 2307.14984#16 | 2307.14984 | [
"2302.13971"
]
|
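The binary attitude dynamics can be sketched the same way: one call establishes the initial stance from profile and history, and subsequent calls update it per received message. The prompt phrasing and keyword-based parsing are assumptions.

```python
# Sketch: binary attitude initialization and one Markov update step.
from typing import Callable, List

def initial_attitude(query_llm: Callable[[str], str],
                     profile: str, history: List[str]) -> str:
    prompt = (
        f"User profile: {profile}\n"
        f"Recent posts: {' | '.join(history[-5:])}\n"
        "Is this user's initial attitude toward nuclear energy 'positive' "
        "or 'negative'? Answer with one word."
    )
    return "positive" if "positive" in query_llm(prompt).lower() else "negative"

def update_attitude(query_llm: Callable[[str], str], profile: str,
                    current: str, received_post: str) -> str:
    prompt = (
        f"User profile: {profile}\n"
        f"Current attitude toward nuclear energy: {current}.\n"
        f"The user just read: \"{received_post}\"\n"
        "Answer 'positive' or 'negative' for their attitude at the next time step."
    )
    return "positive" if "positive" in query_llm(prompt).lower() else "negative"
```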
2307.14984#16 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Leveraging the strengths of LLMs can significantly alleviate this challenge. These models, with their ability to generate text that closely resembles human-like language patterns, facilitate the simulation of user-generated content with high accuracy. By inputting the user's profile, along with their current attitude or emotional state, these models are capable of generating content that faithfully reproduces what a user might post in response to a particular event. This approach, informed by the capabilities of large language models, enables us to craft a sophisticated simulation that mirrors the content generation process in real-world social networks. It thereby provides a nuanced understanding of how users' attitudes and emotions are reflected in their content, offering invaluable insights for the study of social dynamics. As can be seen in Table 3, our methods yield impressive results. In the Gender Discrimination scenario, we achieved a Perplexity score of 19.289 and an average cosine similarity of 0.723 when compared with the actual user-generated text. In the case of the Nuclear Energy scenario, these figures were even more impressive, with a Perplexity score of 16.145 and an average cosine similarity of 0.741. These results validate the effectiveness of our approach, where the LLM's profound comprehension of human cognition and behavior significantly contributes to accurately simulating user-generated content in social network simulations. Thus, our model serves as a powerful tool in understanding and predicting social dynamics in various contexts. # 3.3.4 Interactive Behavior Simulation During the simulation, upon receiving a message from one of their followees, the user is faced with a consequential decision: whether to engage in forwarding, posting new content, or doing nothing. Effectively modeling the decision-making process is important in simulating information propagation. Through our data-driven approach, we utilize Large Language Models (LLMs) to simulate users' interaction behavior by capturing the intricate relationship between users and contexts. The input is the information environment that the user senses, and the LLM-empowered agent makes the decision by learning from the observed real data. | 2307.14984#15 | 2307.14984#17 | 2307.14984 | [
"2302.13971"
]
|
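A plausible realization of the content-generation step: the agent's profile and current internal state are packed into a role-play prompt and the LLM returns the post text. The template below is an assumption about the prompt design, not the exact one used.

```python
# Sketch: generating a user's post from their profile and current state.
from typing import Callable

def generate_post(query_llm: Callable[[str], str],
                  profile: dict, state: str, event: str) -> str:
    prompt = (
        f"Role-play a {profile['age']}-year-old {profile['gender']} "
        f"{profile['occupation']} whose current stance or emotion about "
        f"'{event}' is '{state}'.\n"
        "Write a short social-media post (under 50 words) that this person "
        "might publish in reaction to the event. Output only the post text."
    )
    return query_llm(prompt).strip()
```

Generated posts can then be scored against real user text with perplexity and embedding cosine similarity, which is how Table 3 evaluates this component.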
2307.14984#17 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Our model has demonstrated commendable efficacy in this regard. In the scenario of Gender Discrimination, our model achieved an Accuracy of 66.2%, AUC of 0.662, and F1-Score of 0.667. Progressing to the Nuclear Energy context, the model's performance remained robust, with an Accuracy of 69.5%, AUC of 0.681, and F1-Score of 0.758. These promising results not only attest to the LLM's capability in accurately simulating individual user behavior but also pave the way for exploring its potential at a larger scale. This accomplishment forms the basis for the population-level simulation, which we will delve into in the subsequent sections. # 3.4 Population-level Simulation In S3, we capture three forms of propagation, including the propagation of information, emotion, and attitude. Here information propagation focuses on the transmission of news that describes events in social environments. Emotion propagation emphasizes the social contagion of people's feelings toward specific events or topics. Attitude propagation describes how people exchange their attitudes or viewpoints in the social network. Subsequently, we shall expound upon our comprehensive capacity to simulate these three aforementioned forms of propagation. # 3.4.1 Information Propagation With the widespread adoption of digital media, the propagation of information experiences a significant acceleration [22, 23]. In the context of a simulation system designed to mimic social networks, one of its paramount functionalities lies in accurately modeling the process of information propagation and delineating crucial phase transitions [38, 26]. For example, Notarmuzi et al. [26] conducted extensive empirical studies on a large scale, successfully distilling the concepts of universality, criticality, and complexity associated with information propagation in social media. Meanwhile, Xie et al. [38] expanded upon the widely accepted percolation theory and skillfully captured the intricate phase transitions inherent in the spread of information on social media platforms. | 2307.14984#16 | 2307.14984#18 | 2307.14984 | [
"2302.13971"
]
|
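The three-way interaction decision (repost, post new content, or do nothing) from the interactive behavior simulation above can be sketched as a single constrained prompt; the wording and the conservative 'nothing' fallback are assumptions.

```python
# Sketch: the interaction decision an agent makes on receiving a message.
from typing import Callable

ACTIONS = ("repost", "post", "nothing")

def decide_action(query_llm: Callable[[str], str],
                  demographics: dict, received_post: str) -> str:
    prompt = (
        f"You are a {demographics['age']}-year-old {demographics['gender']} "
        f"{demographics['occupation']} on a social network.\n"
        f"A followee just shared: \"{received_post}\"\n"
        "Would you 'repost' it, 'post' your own new content about the event, "
        "or do 'nothing'? Answer with exactly one of the three words."
    )
    answer = query_llm(prompt).strip().lower()
    return answer if answer in ACTIONS else "nothing"
```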
2307.14984#18 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Figure 2: True spread (a), simulated spread (b), true emotion trend (c), and simulated emotion trend (d) of the Chained Eight-child Mother Event. Figure 3: True spread (a), simulated spread (b), and the true (c) and simulated (d) changes in the proportion of positive attitudes towards nuclear energy during the Japan Nuclear Wastewater Release Event. Diverging from previous studies grounded in physical models, our approach adopts an LLM perspective to capture the dynamics of the information propagation process. In order to ascertain the efficacy of our proposed S3 model, we have selected two typical events: (i) the Eight-child Mother Event and (ii) the Japan Nuclear Wastewater Release Event. The former event came to public attention in late January 2022, encompassing a range of contentious issues, such as sexual assault and feminism. The latter event entails the Japanese government's decision to release nuclear wastewater into the ocean, eliciting significant global scrutiny and interest. | 2307.14984#17 | 2307.14984#19 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#19 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Utilizing our simulator as a foundation, we employ a quantitative approach to evaluate the temporal dissemination of the aforementioned occurrences. This is achieved by calculating the overall number of people who are aware of the events at each time step (refer to Figure 2(b) and Figure 3(b)). Subsequently, through a comparative analysis with the empirical data (as illustrated in Figure 2(a) and Figure 3(a)), we discern that our simulator exhibits a commendable capacity for accurately forecasting the propagation patterns of both events. In particular, we notice that the rate of increase gradually becomes marginal over time, which can also be captured by our simulator. # 3.4.2 Emotion Propagation Another indispensable form of propagation is the transmission of emotion on social media [37, 32]. For example, Wang et al. [37] adopt natural language processing techniques (BERT) and perform frequent global measurements of emotion states to gauge the impacts of the pandemic and related policies. In S3, we utilize the state-of-the-art LLM to extract emotions from real-world data and simulate the emotional propagation among LLM-based agents. To examine whether the S3 simulator can also reproduce the emotion propagation process, we further simulate users' emotions expressed in the Eight-child Mother event. We extract the emotional density from the textual interactions among agents. Comparing our simulation results (Figure 2(d)) and the empirical observations (Figure 2(c)), we find that our model can well capture the dynamic process of emotion propagation. Notably, we observe that there are two emotional peaks in the event. This suggests that if news of the event spreads more slowly across a larger community, a secondary peak in emotional intensity may occur. Based on the initialization obtained from real-world data, our model successfully reproduces these distinct peaks, thereby demonstrating the effectiveness of our proposed S3 system. # 3.4.3 Attitude Propagation | 2307.14984#18 | 2307.14984#20 | 2307.14984 | [
"2302.13971"
]
|
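The emotional-density curves compared in Figure 2 can be computed from the simulated agents' states. The paper does not spell out the exact formula, so the definition below (the fraction of users whose emotion is above 'calm' at each step) is one plausible reading.

```python
# Sketch: tracking population-level emotional density over simulation steps.
from typing import Dict, List

def emotional_density(emotions_by_step: List[Dict[str, str]]) -> List[float]:
    """emotions_by_step[t] maps each user id to their emotion level at step t."""
    series = []
    for step in emotions_by_step:
        if not step:
            series.append(0.0)
            continue
        aroused = sum(1 for e in step.values() if e in ("moderate", "intense"))
        series.append(aroused / len(step))
    return series
```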
2307.14984#20 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | One of today's most concerning issues is the polarization and confrontation between populations with diverging attitudes toward controversial topics or events. Great efforts have been made to quantify real-world polarization [22, 12, 16] and simulate the polarization process using co-evolution models [31, 3, 4, 20]. In S3, we use the LLM to simulate attitude propagation and predict polarization patterns in social networks. Here we focus on the Japan Nuclear Wastewater Release Event, in which people's attitudes are polarized toward nuclear energy. As shown in Figure 3, we can observe that with the propagation of related information, positive attitudes toward nuclear energy decline rapidly, exhibiting a salient trough. In our S3 model, through modeling repeated interactions among agents, we reproduce the sudden decrease in positive attitudes and also capture their gradual increase. Overall, these observations suggest that our proposed model can not only simulate attitude propagation but also capture the critical dynamical patterns when situated in real-world scenarios. # 4 Architecture and Methodology # 4.1 Architecture Design In order to simulate the process of information propagation on the online social network, we have designed a message propagation simulation framework, illustrated in Figure 1 and explained in detail below. Environment Construction: The construction of the environment involves the formation of a social network on a public platform, comprising users and connections among them. For instance, users have the ability to establish mutual following relationships with their friends, or one-way following relationships with users they find interesting. Hence, the social network can be characterized as a directed graph, where the outdegree and indegree of nodes in the network represent the number of people they follow and the number of followers they possess, respectively. The users within this network can be broadly categorized into three groups: influential users, regular users, and low-impact users. Influential users typically exhibit a significantly larger number of followers compared to the number of people they follow. Moreover, they demonstrate a tendency to share high-quality original information. Regular users, on the other hand, typically maintain a balanced proportion of followers and followings. Additionally, a considerable portion of regular users engage in mutual following relationships, which often reflect their real-life friendships. Conversely, low-impact users exhibit limited followers, infrequent message posting, and typically represent the terminal points of message propagation chains. | 2307.14984#19 | 2307.14984#21 | 2307.14984 | [
"2302.13971"
]
|
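Given the directed follow graph described above (out-degree = followees, in-degree = followers), the three user groups can be assigned from degree statistics. The thresholds below are illustrative assumptions; the paper characterizes the groups only qualitatively.

```python
# Sketch: degree-based user categorization on the directed follow graph.
# An edge u -> v means u follows v, so a user's followers = in-degree.
import networkx as nx

def categorize_users(g: nx.DiGraph) -> dict:
    categories = {}
    for user in g.nodes:
        followers = g.in_degree(user)
        followees = g.out_degree(user)
        if followers >= 10 * max(followees, 1):   # assumed threshold
            categories[user] = "influential"
        elif followers <= 5:                      # assumed threshold
            categories[user] = "low-impact"
        else:
            categories[user] = "regular"
    return categories
```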
2307.14984#21 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | It is important to note that within this framework, we have excluded the consideration of social bots and zombie users, despite their prevalence on social platforms. User Characterization: In addition to the social relationships present within the network, each user possesses their own attribute descriptions. Certain attributes are objective and specific, encompassing factors such as gender, occupation, and age. On the other hand, other attributes are more abstract, including their attitudes towards specific events and their prevailing emotional states. The former attributes tend to exhibit minimal fluctuations over short durations, whereas the latter attributes are more dynamic, particularly when users engage in information browsing on social platforms. In such cases, their fundamental attributes, message content, and message sources consistently shape their attitudes, emotions, and other abstract attributes. In light of the aforementioned descriptions, we also introduce a memory pool for each user. Given the abundance of messages from diverse users on online public platforms, a multitude of messages emerge daily. It is important to acknowledge that different messages exert varying influences on distinct users. To address this, we draw inspiration from [28] and propose the concept of influence factors. These factors calculate weighted scores | 2307.14984#20 | 2307.14984#22 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#22 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | In light of the aforementioned descriptions, we also introduce a memory pool for each user. Given the abundance of messages from diverse users on online public platforms, a multitude of messages emerge daily. It is important to acknowledge that different messages exert varying influences on distinct users. To address this, we draw inspiration from [28] and propose the concept of influence factors. These factors calculate weighted scores 9 based on parameters such as posting time, content relevance, and message importance. By doing so, we ensure that the userâ s memory pool retains the most impactful messages, making them highly memorable. â ¢ Temporal Influence: The recency of messages plays a significant role in human memory, with previous messages gradually fading over time. A time score is ascribed to messages using a prescribed forgetting function. | 2307.14984#21 | 2307.14984#23 | 2307.14984 | [
"2302.13971"
]
|
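The influence factors above combine into a single score that decides which messages a bounded memory pool retains. The weights, the exponential forgetting function, and the per-source scores in this sketch are illustrative hyper-parameters, not values from the paper.

```python
# Sketch: influence-factor scoring for a user's bounded memory pool.
import math
import heapq

SOURCE_SCORE = {"mutual_follow": 1.0, "one_way_follow": 0.7,
                "platform_rec": 0.5, "self_post": 0.9}   # assumed values

def influence_score(age_steps: int, relevance: float, source: str,
                    w_time: float = 0.4, w_rel: float = 0.4,
                    w_src: float = 0.2, decay: float = 0.1) -> float:
    time_score = math.exp(-decay * age_steps)   # a prescribed forgetting function
    return w_time * time_score + w_rel * relevance + w_src * SOURCE_SCORE[source]

def top_k_memory(messages: list, k: int = 20) -> list:
    """messages: (age_steps, relevance, source, text) tuples; keep the k most influential."""
    return heapq.nlargest(
        k, messages, key=lambda m: influence_score(m[0], m[1], m[2]))
```

Here relevance would be the cosine similarity between the user's attributes and the message content, as the Content Relevance factor specifies.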
2307.14984#23 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | â ¢ Content Relevance: The relevance of message content is assessed with regard to the userâ s individual characteristics. Notably, younger individuals tend to exhibit a greater inclination towards entertainment-related events, whereas middle-aged individuals demonstrate heightened interest in political affairs. To quantify the degree of relevance, a relevance score is obtained by measuring the cosine similarity between a userâ s fundamental attributes and the content of the message. â ¢ Message Authenticity: The authenticity of messages is closely related to their sources. Messages are categorized based on their origins, encompassing messages disseminated by unidirectional followers, messages shared by mutual followers, messages recommended by the platform, and messages previously posted by the user themselves. Distinct scores are assigned to messages based on their respective sources. Update and Evolution Mechanism: During a social gathering, various official accounts and individ- ual users contribute posts concerning the event, encompassing news reports and personal viewpoints. Upon encountering these messages, the users who follow them manifest diverse emotional responses. Some users may even formulate their own stances on contentious matters, either in support or op- position, subsequently engaging in online activities such as endorsing, disseminating, and creating original message. In this simulation, we employ large language models to replicate individual users, leveraging their profiles and memory pools as prompts to generate cognitive reactions and behavioral responses. Subsequently, their abstract attributes and memory pools undergo updates. Following the modification of a userâ s memory pool, these messages disseminate and exert influence on their followers while they peruse the content. This iterative process persists, emulating the propagation of messages and the evolution of individualsâ cognitive states. # 4.2 Initialization # 4.2.1 Social Network Construction Within the scope of this study, we propose an initialization approach to construct a network utilizing data acquired from real-world social media sources (refer to Table 1). Strict adherence to privacy regulations and policies is maintained throughout the collection of social media data. Our approach leverages keyword-matching techniques to effectively extract posts relevant to the simulated scenarios. Subsequently, we delve into the identification of the authors and extract them as the foundational nodes of our network. Expanding beyond the individual level, we meticulously gather socially connected users. To establish connections between users, directed edges are established if the corresponding followee exists within the extracted user set. To optimize simulation efficiency, in this work, we focus solely on this sub-graph rather than the entire graph which is too large. | 2307.14984#22 | 2307.14984#24 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#24 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | During the simulation, the dissemination of messages occurs exclusively between source nodes and their corresponding target nodes. # 4.2.2 User Demographics Prediction Expanding upon the properties of the node, specifically focusing on user demographic attributes, emerges as a pivotal stride in our endeavor towards a more exhaustive simulation. Through the incorporation of additional information regarding the users into the system, we can delve into and scrutinize their behaviors, interactions, and influence within the network, more effectively. User demographic attributes allow us to capture heterogeneity and diversity in real-world social networks. That is, demographic attributes play a significant role in shaping individual behaviors and preferences, which, in turn, influence the networkâ s overall attitude dynamics. In our study, we chose gender, age, and occupation as the major demographic attributes. As social media data does not directly offer attributes such as gender, age, and occupation, we rely on prediction techniques to estimate these attributes. Leveraging LLMs provides a robust approach to predicting these demographic attributes. By utilizing LLMs, we can leverage the extensive contextual understanding and knowledge encoded | 2307.14984#23 | 2307.14984#25 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#25 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Table 5: Ten occupations. The occupation categories include Education Practitioner; Administrative Manager / Officer; Unemployed / Student; Engineer; Labor Technician / Worker; Logistics Practitioner; Medical Personnel; Media Personnel; and Entertainment and Arts Practitioner. Table 4: Prediction performance of gender and age. Gender: Accuracy 0.710, F1 0.667, AUC 0.708. Age: MSE 128.0, MAE 7.53, Avg % Error 21.50. | 2307.14984#24 | 2307.14984#26 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#26 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | within the models to infer user demographics based on available information, such as personal descriptions and content within posts. The technical details are as follows. User Demographics Prediction with LLM. In order to predict user gender based on personal descriptions, since the collected data lacks sufficient labels, we use a public dataset released in [29, 40] for assistance. It allows us to extract a vast array of labeled gender and personal description relationships. We filter this dataset, keeping entries whose descriptions are longer than 10 words, and use them as the ground truth to tune the language model. Specifically, we use ChatGLM [10] as the foundation model and employ the P-Tuning-v2 [21] methodology. We feed the model with the personal description as a prompt and let the model determine the most probable gender associated with the given description. To predict age using users' posts, we use the Blog Authorship Corpus dataset [33] to establish the expression-to-age relationship. This dataset provides us with author-age labels for corresponding textual posts. We randomly select the historical blogs in [33] and add them to the prompt as input; then, the age can be used as the label for prefix tuning. The tuned large language model can be used to predict the age label in our collected social media dataset. Next, we predict occupations using only pre-trained LLMs. In this scenario, we directly feed users' posts and personal profile descriptions to the LLM for prediction. By examining the content of these inputs, the model showcases its capacity to comprehend and infer users' occupations, further enhancing our demographic prediction capabilities. # Prediction Result Evaluation The outcomes of our age and gender prediction analysis are presented in Table 4. Our gender predictor, which relies on a fine-tuned Large Language Model (LLM), achieves satisfactory results. Despite the absence of explicit gender information in all personal descriptions, the predictor successfully generates valid predictions. Moving on to age, we select English blogs from [33] and ensure a similar age distribution across the training and testing process. The results show that the mean squared error (MSE) was 128, while the mean absolute error (MAE) was around 7.53. These values indicate a 21.5% unified percentage error (see Table 4). | 2307.14984#25 | 2307.14984#27 | 2307.14984 | [
"2302.13971"
]
|
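The tuning setup above reduces to assembling prompt/label pairs from labeled corpora. The record fields and prompt templates below are assumptions, and the actual P-Tuning-v2 training loop on ChatGLM is omitted.

```python
# Sketch: building prompt/label pairs for demographic prediction.
from typing import Iterable, List, Tuple

def gender_tuning_pairs(records: Iterable[dict]) -> List[Tuple[str, str]]:
    """records: dicts with 'description' and 'gender' labels from a public dataset."""
    pairs = []
    for r in records:
        if len(r["description"].split()) <= 10:  # keep longer descriptions only
            continue
        prompt = (f"Personal description: {r['description']}\n"
                  "What is the most probable gender of this user?")
        pairs.append((prompt, r["gender"]))
    return pairs

def occupation_prompt(description: str, posts: List[str]) -> str:
    """Zero-shot occupation query for a pre-trained LLM (no tuning)."""
    return (f"Personal description: {description}\n"
            f"Recent posts: {' | '.join(posts[:5])}\n"
            "Infer this user's most likely occupation in a few words.")
```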
2307.14984#27 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | As for the occupations, we initially include the posts and personal descriptions of the combined user dataset in the prompt. We then feed the prompt to pre-trained ChatGLM to obtain the occupation of each user. We leave the supervised fine-tuning for occupation prediction as future work. It results in a total of 1,016 different occupations being identified from all users. However, utilizing all occupations is not essential since some occupations are very close. Thus, we group all occupations into 10 distinct occupation categories using the LLM, of which the categories can be found in Table 5. By condensing the number of occupations into a smaller set, we are able to simplify the simulation. | 2307.14984#26 | 2307.14984#28 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#28 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | # 4.3 Emotion and Attitude Simulation In our emotion simulation model, we adopt a Markov chain approach to capture the dynamic process of emotional changes triggered by a user receiving a message. The simulation involves four essential inputs: user demographics, current emotion, the received post. Emotions are classified into three distinct stages: calm, moderate, and intense. User demographics serve as supplementary information 11 LLMs, providing a reference point to contextualize emotional responses. The current emotion represents the userâ s emotional status before receiving the post, while the received post acts as the actuator for prompting the LLM to determine a new emotional status. To regulate the decrease of emotional states over time, we introduce the decaying coefficient, a hyper-parameter that controls the decay rate of emotions. Our hypothesis assumes that emotions tend to diminish gradually as time passes, influencing the emotion simulation process. Throughout this intricate mechanism, we impart these details by prompt to the LLMs, which are responsible for deciding whether the emotional state should change in response to the received post. We are trying to reduce as much manual intervention as possible, to highlight the capability of LLMs in simulating emotional changes by posts. The attitude simulation is similar to the emotion simulation. # 4.4 Behavior Simulation # 4.4.1 Content-generation Behavior In our social network simulation model, we incorporate an advanced approach utilizing Large Language Models (LLMs) to reproduce the dynamic process of content creation, shaped by usersâ | 2307.14984#27 | 2307.14984#29 | 2307.14984 | [
"2302.13971"
]
|
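One way to realize the decaying coefficient is geometric decay on a numeric emotion intensity, mapped back to the three discrete levels between LLM updates; this concrete scheme is an assumption, not the paper's specification.

```python
# Sketch: applying the decaying coefficient between simulation steps,
# so emotion intensity drifts back toward 'calm' absent new stimuli.
LEVEL_TO_SCORE = {"calm": 0.0, "moderate": 0.5, "intense": 1.0}

def decay(score: float, decay_coeff: float = 0.8) -> float:
    """Geometric decay of a continuous emotion intensity in [0, 1]."""
    return score * decay_coeff

def to_level(score: float) -> str:
    if score >= 0.75:
        return "intense"
    if score >= 0.25:
        return "moderate"
    return "calm"

# Toy trace: an 'intense' agent left unstimulated relaxes over time.
s = LEVEL_TO_SCORE["intense"]
trace = []
for _ in range(7):
    s = decay(s)
    trace.append(to_level(s))
# trace == ['intense', 'moderate', 'moderate', 'moderate',
#           'moderate', 'moderate', 'calm']
```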
2307.14984#29 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | emotions and attitudes towards specific events. The simulation hinges on two vital inputs: user profile information and their current emotional or attitudinal state towards the event. Each piece of generated content is an embodiment of a user's internal state and external influences, reflecting their unique perspective. User profile information serves as a reference point for the LLMs, furnishing essential context to shape content responses. The current emotional or attitudinal state symbolizes the user's mindset when reacting to the event, thereby playing a vital role in the LLM's generation of potential responses. Underpinning this sophisticated mechanism is the profound cognitive and behavioral comprehension of LLMs. The LLM is prompted with these details and is then responsible for deciding how the content should be shaped in response to the event. Our aim is to minimize manual intervention as much as possible, to highlight the capability of LLMs in simulating authentic user-generated content. The approach mirrors the way real-world users form their posts in response to distinct events, aligning the text generation process with the emotional or attitudinal dynamics of users. In this manner, we have been successful in utilizing LLMs to emulate the content creation process on social networks with high fidelity. # 4.4.2 Interaction Behavior During the simulation, when a user receives a message from one of their followees, a critical decision needs to be made: whether to repost, post new content, or do nothing. That is to say, the interaction behavior includes reposting (forwarding) the original content and posting new content about the same social event. | 2307.14984#28 | 2307.14984#30 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#30 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | The user's interaction behavior plays a pivotal role in propagating messages to the user's followers, facilitating the spread of information within the social network. However, modeling the complex mechanisms governing a user's interaction behavior poses significant challenges. To address it, we employ large language models to capture the intricate relationship between the user, post features, and interaction behavior. Specifically, to leverage the ability of LLMs to simulate a real user's interaction behavior, we prompt the model with information regarding the user's demographic properties, i.e., gender, age, and occupation, in addition to the specific posts received, letting the LLM think like the user and make its decision. By such means, we enable the LLM to make predictions regarding the user's inclination to repost the message or post new content. To summarize, by employing the above approach, we can effectively harness the power of LLMs to predict users' interaction behavior, taking into account various user and post features. # 4.5 Other Implementation Details The system employs various techniques for utilizing or adapting large language models to the agent-based simulation. For prompting-driven methods, we use either the GPT-3.5 API provided by OpenAI (https://platform.openai.com/overview) or | 2307.14984#29 | 2307.14984#31 | 2307.14984 | [
"2302.13971"
]
|
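A thin, backend-agnostic wrapper keeps agent code independent of whether responses come from the hosted GPT-3.5 API or a local ChatGLM-6B. The concrete API calls are deliberately left as injected callables, since SDK signatures vary by version.

```python
# Sketch: one completion interface over interchangeable LLM backends.
from typing import Callable

class LLMBackend:
    def __init__(self, complete: Callable[[str], str]):
        self._complete = complete  # e.g., wraps an OpenAI or ChatGLM call

    def __call__(self, prompt: str) -> str:
        return self._complete(prompt)

# Assumed wiring (the wrapped functions are placeholders, not real SDK calls):
#   backend = LLMBackend(lambda p: openai_gpt35_chat(p))  # hosted GPT-3.5
#   backend = LLMBackend(lambda p: chatglm_6b_chat(p))    # local ChatGLM-6B
# Every agent routine above (update_emotion, decide_action, ...) can then
# take `backend` as its `query_llm` argument.
```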
2307.14984#31 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | a ChatGLM-6B model [10]. For fine-tuning methods, we conduct the tuning based on the open-source ChatGLM model. # 5 Discussions and Open Problems The S3 system represents an initial endeavor aimed at harnessing the capabilities of large language models to facilitate simulation within the domain of social science. In light of this, our analysis delves further into its application and limitations, along with promising future improvements. # 5.1 Application of S3 System Leveraging the powerful capabilities of large language models, this system excels in agent-based simulation. The system has the following applications in the field of social science. | 2307.14984#30 | 2307.14984#32 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#32 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | • Prediction. Prediction is the most fundamental ability of agent-based simulation. Large language model-based simulation can be utilized to predict social phenomena, trends, and individual behaviors with historically collected data. For example, in economics, language models can help forecast market trends, predict consumer behavior, or estimate the impact of policy changes. In sociology, these models can aid in predicting social movements, public opinion shifts, or the adoption of new cultural practices. • Reasoning and explanation. During the simulation, each agent can be easily configured, and thus the system can facilitate reasoning and explanation in social science by generating phenomena under different configurations. Comparing the simulation results can help explain the cause of specific phenomena. Furthermore, the agent can be observed via prompts, which can reflect how a human takes actions in the social environment. | 2307.14984#31 | 2307.14984#33 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#33 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | • Pattern discovery and theory construction. With repeated simulation, at a far lower cost than real-world data collection, the simulation process can reveal patterns of the social network. By uncovering patterns, these models can contribute to the development of new theories and insights. Furthermore, researchers can configure all the agents and the social network environment based on an assumption or theory, and observe the simulation results. Testing the simulation results can help validate whether the proposed assumption or theory is correct or not. • Policy making. The simulation can inform evidence-based policy-making by simulating and evaluating the potential outcomes of different policy interventions. It can assess the impact of policy changes on various social factors, including individual agents and the social environment. For example, in public health, it can simulate the spread of infectious diseases to evaluate the effectiveness of different intervention strategies. In urban planning, it can simulate the impact of transportation policies on traffic congestion or air pollution, by affecting how the users select public transportation. By generating simulations, these models can aid policymakers in making informed decisions. # Improvement on Individual-level Simulation The current design of individual simulation still has several limitations requiring further improvement. First, the agent requires more prior knowledge of user behavior, including how real humans sense the social environment and make decisions. In other words, the simulation should encompass an understanding and integration of intricate contextual elements that exert influence on human behavior. Second, while prior knowledge of user behavior is essential, simulations also need to consider the broader context in which decisions are made. This includes factors such as historical events, social conditions, and personal experiences. By enhancing the agent's capacity to perceive and interpret contextual cues, more precise simulations can be achieved. # Improvement on Population-level Simulation First, it is better to combine agent-based simulation with system dynamics-based methods. | 2307.14984#32 | 2307.14984#34 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#34 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | By enhancing the agentâ s capacity to perceive and interpret contextual cues, more precise simulations can be achieved. # Improvement on Population-level Simulation First, it is better to combine agent-based simulation with system dynamics-based methods. 13 Agent-based simulation focuses on modeling individual entities and their interactions, while system dynamics focuses on modeling the behavior of the social complex system as a whole. Through the fusion of these two methodologies, we can develop simulations of heightened comprehensiveness, encompassing both micro-level interactions and macro-level systemic behavior. This integration can provide a more accurate representation of population dynamics, including the impact of individual decisions on the overall system. Second, we can consider a broader range of social phenomena. This involves modeling various societal, economic, and cultural factors that influence human behavior and interactions. Examples of social phenomena to consider include social networks, opinion dynamics, cultural diffusion, income inequality, and infectious disease spread. By incorporating these phenomena into the simulation, we can better validate the systemâ s effectiveness and also gain more insights into social simulation. # Improvement on System Architecture Design First, we can consider incorporating other channels for social event information. It is essential to acknowledge that social-connected users are not the sole providers of information for individuals within social networks. Consequently, the integration of supplementary data sources has the potential to enrich the individual simulation. For instance, recommender systems can be integrated to gather diverse information about social events. This integration can help capture a wider range of perspectives and increase the realism of the simulation. Second, the system architecture should consider improving efficiency, which is essential for running large-scale simulations effectively. Optimizing the system architecture and computational processes can significantly enhance the performance and speed of simulations. To this end, techniques such as parallel computing, distributed computing, and algorithmic optimizations can be employed to reduce computational complexity and advance the efficiency of simulation runs. This allows for faster and more extensive exploration of scenarios, thereby enabling researchers to gain insights faster. Third, it is essential to add an interface for policy intervention. Including an interface that allows policymakers to interact with the simulation can be beneficial. This interface would enable policy- makers to input and test various interventions and policies in a controlled environment. By simulating the potential outcomes of different policy decisions, policymakers can make more informed choices. They can also evaluate the potential impact of their interventions on the simulated population. This feature can facilitate evidence-based decision-making and identify effective strategies. | 2307.14984#33 | 2307.14984#35 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#35 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | # 6 Conclusion In this paper, we present the S3 system (Social Network Simulation System) as a novel approach aimed at tackling the complexities of social network simulation. By harnessing the advanced capabilities of large language models (LLMs) in the realms of perception, cognition, and behavior, we have established a framework for social network emulation. Our simulations concentrate on three pivotal facets: emotion, attitude, and interactive behaviors. This research marks a significant stride forward in social network simulation, pioneering the integration of LLM-empowered agents. Beyond social science, our work possesses the potential to stimulate the development of simulation systems across diverse domains. Employing this methodology enables researchers and policymakers to attain profound insights into intricate social dynamics, thereby facilitating informed decision-making and effectively addressing various societal challenges. | 2307.14984#34 | 2307.14984#36 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#36 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | # References [1] Gati V Aher, Rosa I Arriaga, and Adam Tauman Kalai. Using large language models to simulate multiple humans and replicate human subject studies. In International Conference on Machine Learning, pages 337–371. PMLR, 2023. [2] Robert Axelrod. Advancing the art of simulation in the social sciences. In Simulating social phenomena, pages 21–40. Springer, 1997. [3] Fabian Baumann, Philipp Lorenz-Spreen, Igor M Sokolov, and Michele Starnini. Modeling echo chambers and polarization dynamics in social networks. Physical Review Letters, 124(4):048301, 2020. | 2307.14984#35 | 2307.14984#37 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#37 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | [4] Fabian Baumann, Philipp Lorenz-Spreen, Igor M Sokolov, and Michele Starnini. Emergence of polarized ideological opinions in multidimensional topic spaces. Physical Review X, 11(1):011012, 2021. [5] Paul Bratley, Bennett L Fox, and Linus E Schrage. A guide to simulation, 1987. [6] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. | 2307.14984#36 | 2307.14984#38 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#38 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. [7] Gary Charness and Matthew Rabin. Understanding social preferences with simple tests. The quarterly journal of economics, 117(3):817–869, 2002. [8] Bastien Chopard and Michel Droz. Cellular automata. Modelling of Physical, pages 6–13, 1998. [9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. | 2307.14984#37 | 2307.14984#39 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#39 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. [10] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335, Dublin, Ireland, May 2022. Association for Computational Linguistics. [11] Rohan Anil et al. Palm 2 technical report, 2023. [12] James Flamino, Alessandro Galeazzi, Stuart Feldman, Michael W Macy, Brendan Cross, Zhenkun Zhou, Matteo Serafino, Alexandre Bovet, Hernán A Makse, and Boleslaw K Szymanski. | 2307.14984#38 | 2307.14984#40 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#40 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Political polarization of news media and influencers on Twitter in the 2016 and 2020 US presidential elections. Nature Human Behaviour, pages 1–13, 2023. [13] Jay W Forrester. System dynamics and the lessons of 35 years. In A systems-based approach to policymaking, pages 199–240. Springer, 1993. [14] Nigel Gilbert and Klaus Troitzsch. Simulation for the social scientist. McGraw-Hill Education (UK), 2005. | 2307.14984#39 | 2307.14984#41 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#41 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | [15] Perttu Hämäläinen, Mikke Tavast, and Anton Kunnari. Evaluating large language models in generating synthetic HCI research data: a case study. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–19, 2023. [16] Marilena Hohmann, Karel Devriendt, and Michele Coscia. Quantifying ideological polarization on a network using generalized Euclidean distance. Science Advances, 9(9):eabq2044, 2023. | 2307.14984#40 | 2307.14984#42 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#42 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | [17] John J Horton. Large language models as simulated economic agents: What can we learn from homo silicus? Technical report, National Bureau of Economic Research, 2023. [18] Peter Kolesar and Warren E Walker. A simulation model of police patrol operations: program description. 1975. [19] Lik-Hang Lee, Tristan Braud, Pengyuan Zhou, Lin Wang, Dianlei Xu, Zijun Lin, Abhishek Kumar, Carlos Bermejo, and Pan Hui. | 2307.14984#41 | 2307.14984#43 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#43 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | All one needs to know about metaverse: A complete survey on technological singularity, virtual ecosystem, and research agenda. arXiv preprint arXiv:2110.05352, 2021. [20] Jiazhen Liu, Shengda Huang, Nathaniel M Aden, Neil F Johnson, and Chaoming Song. Emergence of polarization in coevolving networks. Physical Review Letters, 130(3):037401, 2023. [21] Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. | 2307.14984#42 | 2307.14984#44 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#44 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68, Dublin, Ireland, May 2022. Association for Computational Linguistics. [22] Philipp Lorenz-Spreen, Lisa Oswald, Stephan Lewandowsky, and Ralph Hertwig. A systematic review of worldwide causal and correlational evidence on digital media and democracy. Nature human behaviour, 7(1):74–101, 2023. [23] Stefan Luding. Information propagation. Nature, 435(7039):159–160, 2005. [24] Lawrence C Marsh and Meredith Scovill. Using system dynamics to model the social security system. In NBER Workshop on Policy Analysis with Social Security Research Files, pages 15–17, 1978. | 2307.14984#43 | 2307.14984#45 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#45 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | [25] Dennis L Meadows, William W Behrens, Donella H Meadows, Roger F Naill, Jørgen Randers, and Erich Zahn. Dynamics of growth in a finite world. Wright-Allen Press Cambridge, MA, 1974. [26] Daniele Notarmuzi, Claudio Castellano, Alessandro Flammini, Dario Mazzilli, and Filippo Radicchi. | 2307.14984#44 | 2307.14984#46 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#46 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Universality, criticality and complexity of information propagation in social media. Nature communications, 13(1):1308, 2022. [27] OpenAI. Gpt-4 technical report, 2023. [28] Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. [29] Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, and Jie Tang. Deepinf: Social influence prediction with deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ' | 2307.14984#45 | 2307.14984#47 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#47 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | 18, pages 2110–2119, New York, NY, USA, 2018. Association for Computing Machinery. [30] William Samuelson and Richard Zeckhauser. Status quo bias in decision making. Journal of risk and uncertainty, 1:7–59, 1988. [31] Fernando P Santos, Yphtach Lelkes, and Simon A Levin. Link recommendation algorithms and dynamics of polarization in online social networks. Proceedings of the National Academy of Sciences, 118(50):e2102141118, 2021. [32] Joseph A Schafer. | 2307.14984#46 | 2307.14984#48 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#48 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Spinning the web of hate: Web-based hate propagation by extremist organizations. Journal of Criminal Justice and Popular Culture, 2002. [33] Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. Effects of age and gender on blogging. In AAAI spring symposium: Computational approaches to analyzing weblogs, volume 6, pages 199–205, 2006. [34] Peter D Spencer. The effect of oil discoveries on the British economy – theoretical ambiguities and the consistent expectations simulation approach. The Economic Journal, 94(375):633–644, 1984. | 2307.14984#47 | 2307.14984#49 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#49 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | [35] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. | 2307.14984#48 | 2307.14984#50 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#50 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | [36] Klaus G Troitzsch. Social science microsimulation. Springer Science & Business Media, 1996. [37] Jianghao Wang, Yichun Fan, Juan Palacios, Yuchen Chai, Nicolas Guetta-Jeanrenaud, Nick Obradovich, Chenghu Zhou, and Siqi Zheng. Global evidence of expressed sentiment alterations during the COVID-19 pandemic. Nature Human Behaviour, 6(3):349–358, 2022. [38] Jiarong Xie, Fanhui Meng, Jiachen Sun, Xiao Ma, Gang Yan, and Yanqing Hu. Detecting and modelling real percolation and phase transitions of information on social media. Nature Human Behaviour, 5(9):1161–1168, 2021. [39] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414, 2022. [40] Jing Zhang, Jie Tang, Juanzi Li, Yang Liu, and Chunxiao Xing. | 2307.14984#49 | 2307.14984#51 | 2307.14984 | [
"2302.13971"
]
|
2307.14984#51 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Who influenced you? Predicting retweet via social influence locality. ACM Trans. Knowl. Discov. Data, 9(3), April 2015. | 2307.14984#50 | 2307.14984 | [
"2302.13971"
]
|
|
2307.14430#0 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | arXiv:2307.14430v1 [cs.CL] 26 Jul 2023 # Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models # Mayee F. Chen*1 Nicholas Roberts2 Kush Bhatia1 Jue Wang3 Ce Zhang3, 4 Frederic Sala2 Christopher Ré1 1Department of Computer Science, Stanford University 2Department of Computer Sciences, University of Wisconsin-Madison 3Together AI 4Department of Computer Science, University of Chicago | 2307.14430#1 | 2307.14430 | [
"2101.00027"
]
|
|
2307.14430#1 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | # July 28, 2023 # Abstract The quality of training data impacts the performance of pre-trained large language models (LMs). Given a fixed budget of tokens, we study how to best select data that leads to good downstream model performance across tasks. We develop a new framework based on a simple hypothesis: just as humans acquire interdependent skills in a deliberate order, language models also follow a natural order when learning a set of skills from their training data. If such an order exists, it can be utilized for improved understanding of LMs and for data-efficient training. Using this intuition, our framework formalizes the notion of a skill and of an ordered set of skills in terms of the associated data. First, using both synthetic and real data, we demonstrate that these ordered skill sets exist, and that their existence enables more advanced skills to be learned with less data when we train on their prerequisite skills. Second, using our proposed framework, we introduce an online data sampling algorithm, SKILL-IT, over mixtures of skills for both continual pre-training and fine-tuning regimes, where the objective is to efficiently learn multiple skills in the former and an individual skill in the latter. On the LEGO synthetic in the continual pre-training setting, SKILL-IT obtains 36.5 points higher accuracy than random sampling. On the Natural Instructions dataset in the fine-tuning setting, SKILL-IT reduces the validation loss on the target skill by 13.6% versus training on data associated with the target skill itself. We apply our skills framework on the recent RedPajama dataset to continually pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation Harness with 1B tokens than the baseline approach of sampling uniformly over data sources with 3B tokens. | 2307.14430#0 | 2307.14430#2 | 2307.14430 | [
"2101.00027"
]
|
2307.14430#2 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | # Introduction Large language models (LMs) exhibit remarkable capabilities, including producing creative content [55], writing source code [8], or chatting with users [7]. A key ingredient in enabling models to perform such tasks is the data on which the models are trained [17, 19, 59]. A natural way to unlock particular capabilities is to improve this training data. However, it is unclear how to select data from a large corpus for these capabilities given a fixed budget of training tokens, as data selection methods for current state-of-the-art LMs mostly rely on heuristics for filtering and mixing together different datasets [32, 59]. We lack a formal framework for capturing how data influences the model's capabilities and how to utilize this data effectively for improving LM performance. To develop such a framework, we take inspiration from how humans acquire knowledge. A classic idea in education literature is the concept of skills that form a learning hierarchy [65]. For example, one study found that students learned mathematical and scientific skills most quickly when these skills were presented in a particular order [11]. We seek to understand the extent that similar skill-based orderings characterize LM training. Such orderings, if they exist, may provide a better understanding of LMs as well as a mechanism for data-efficient training. For instance, to train an LM for Spanish question generation, we wish to know if training first on related but simpler tasks, such as Spanish grammar and English question generation, helps. We study if the idea of skill orderings can help us build a framework that relates data to LM training and behavior. This requires addressing two challenges revolving around the connection between skills and data. First, in order to show that there exist sets of skills that the LM learns most efficiently in some particular order, an operational definition of LM skill and skill ordering must be developed and validated on data. In initial experiments, we investigated if semantic groupings of data, such as metadata attributes or embedding clusters, were sufficient to represent a skill and characterize how models learn. *Corresponding author: [email protected]. | 2307.14430#1 | 2307.14430#3 | 2307.14430 | [
"2101.00027"
]
|
2307.14430#3 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | [Figure 1 graphic: an ordered skill set over English QA, Spanish QA, English QG, and Spanish QG, linking data slices to a skills graph.] Figure 1: Inspired by how humans acquire knowledge, we hypothesize that LMs best learn skills in a particular order and that this can help improve our understanding and training of LMs. We show that these ordered skill sets exist in real data, which enables skills to be learned with less data given that we train on their prerequisite skills. We then propose SKILL-IT, an online data selection algorithm that learns skills quickly by exploiting their ordering. For instance, we partitioned the Alpaca dataset [56] by instruction type (a technique used to capture dataset diversity [62]), but we found that sampling based on instruction types and random sampling resulted in similar model performance, suggesting that not just any existing notion of data groups can characterize skills. | 2307.14430#2 | 2307.14430#4 | 2307.14430 | [
"2101.00027"
]
|
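To make the preliminary experiment above concrete, here is a minimal sketch of the two baselines being compared: uniform random sampling over the whole dataset versus stratified sampling over metadata groups such as Alpaca instruction types. The record fields and group labels are hypothetical placeholders, not the paper's actual preprocessing.

```python
import random
from collections import defaultdict

def random_sample(dataset, n, seed=0):
    """Baseline: uniform sampling over all examples, ignoring any grouping."""
    rng = random.Random(seed)
    return rng.sample(dataset, n)

def group_stratified_sample(dataset, n, group_key, seed=0):
    """Sample (roughly) equally from each metadata group, e.g. instruction type."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for ex in dataset:
        groups[ex[group_key]].append(ex)
    per_group = n // len(groups)
    sample = []
    for members in groups.values():
        sample.extend(rng.sample(members, min(per_group, len(members))))
    # Top up with random examples if rounding or small groups left us short.
    while len(sample) < n:
        sample.append(rng.choice(dataset))
    return sample

# Toy usage with hypothetical Alpaca-style records:
data = [{"instruction_type": t, "text": f"example-{i}"}
        for i, t in enumerate(["rewrite", "classify", "qa"] * 100)]
print(len(random_sample(data, 60)), len(group_stratified_sample(data, 60, "instruction_type")))
```

The paper's observation is that these two strategies yield similar model performance on Alpaca, which is why a metadata partition alone is not treated as a skill.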
2307.14430#4 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | Second, these definitions of skills must be used to construct sampling distributions to actually improve model training. To develop criteria for a data selection algorithm that learns skills efficiently, we identify challenges that naive selection approaches face. The standard approach of random uniform sampling over data fails to learn skills optimally because it does not account for skill imbalance and ordering. Skills can be distributed unevenly in the data, with more complex skills being rare: for instance, Spanish and question generation (QG) are 5% and 4% of the Natural Instructions dataset [63], respectively, but Spanish QG is only 0.2%. Random sampling also provides no mechanism for taking into account a particular training order and dependency structure on skills. More sophisticated techniques like curriculum learning account for sample-level ordering, but not for skills or their dependencies. The framework we seek must account for these issues of imbalance and ordering. Skill-based framework. We define a skill as a unit of behavior that a model can learn using an associated slice of data (Definition 2.1). An ordered skill set is a collection of skills with a directed skills graph that is neither complete nor empty, where an edge from a prerequisite skill to a skill exists if the amount of training it takes to learn the skill can be reduced when the prerequisite skill is also learned (Definition 2.2, Figure 1 left, center). We show that ordered skill sets exist in synthetic and real datasets under this operational definition. Interestingly, the existence of these ordered skill sets reveals that one can learn a skill quickly not by training solely on that skill, but on a mixture of that skill and its prerequisite skills. For instance, in Figure 3 we observe that Spanish QG can be learned more efficiently when the model also learns English QG and Spanish: we can achieve 4% lower validation loss than training on only Spanish QG over a fixed budget of overall training steps. Next, given an ordered skill set to train on, we use our framework to propose methods for selecting data so that the LM learns skills faster: skill-stratified sampling and an online generalization, SKILL-IT. We address the issue of unevenly distributed skills in datasets by proposing | 2307.14430#3 | 2307.14430#5 | 2307.14430 | [
"2101.00027"
]
|
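One way to operationalize an ordered skill set (Definition 2.2) in code is to store the skills graph as an adjacency matrix, with a positive entry A[i, j] meaning that training on skill j reduces the data needed to learn skill i. The sketch below is our illustration of that reading; the skill names and edge weights are invented, and the "neither complete nor empty" check mirrors the definition.

```python
import numpy as np

skills = ["english_qa", "spanish", "english_qg", "spanish_qg"]

# A[i, j] > 0 means training on skill j reduces the data needed to learn skill i.
# Values here are made up for illustration only.
A = np.array([
    [0.0, 0.0, 0.0, 0.0],   # english_qa: no prerequisites in this toy graph
    [0.0, 0.0, 0.0, 0.0],   # spanish
    [0.5, 0.0, 0.0, 0.0],   # english_qg benefits from english_qa
    [0.0, 0.4, 0.6, 0.0],   # spanish_qg benefits from spanish and english_qg
])

def is_ordered_skill_set(adj, tol=0.0):
    """Definition 2.2 requires a skills graph that is neither complete nor empty."""
    off_diag = adj[~np.eye(len(adj), dtype=bool)]
    return (off_diag > tol).any() and not (off_diag > tol).all()

def prerequisites(adj, skill_idx, tol=0.0):
    """Indices of skills with an edge into `skill_idx`."""
    return [j for j in range(len(adj)) if j != skill_idx and adj[skill_idx, j] > tol]

print(is_ordered_skill_set(A))                   # True
print([skills[j] for j in prerequisites(A, 3)])  # ['spanish', 'english_qg']
```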
2307.14430#5 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | skill-stratified sampling, a simple approach that allows us to explicitly optimize for learning skills by uniformly sampling relevant skills (such as a target skill and its prerequisite skills in fine-tuning). Skill-stratified sampling uses the construction of the ordered skill set but is static: it does not incorporate the ordering as training proceeds, and it oversamples skills that may already be learned early in training. We address this issue by proposing an online data selection algorithm, SKILL-IT, which selects mixtures of training skills and allocates more weight towards skills that are not yet learned or towards influential prerequisite skills (Figure 1 right). SKILL-IT is derived from an online optimization problem over the training skills for minimizing loss on a set of evaluation skills given a fixed budget of data and the skills graph. SKILL-IT is inspired by online mirror descent and can be adapted for continual pre-training, fine-tuning, or out-of-domain evaluation, depending on the relationship between the evaluation skill set and the training skill set. We evaluate SKILL-IT on synthetic and real datasets at two model scales, 125M and 1.3B parameters. For the continual pre-training setting, we show on the LEGO synthetic [72] that we obtain a 35.8 point improvement in accuracy over randomly selecting training data and curriculum learning [3]. | 2307.14430#4 | 2307.14430#6 | 2307.14430 | [
"2101.00027"
]
|
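The exact SKILL-IT update is given in the paper; below is only a minimal sketch of the kind of mirror-descent-style multiplicative-weights step it describes, in which the mixture weight on a training skill grows with the current losses of the evaluation skills it influences according to the skills graph. The loss oracle, learning rate, and matrix values here are all assumed for illustration.

```python
import numpy as np

def skill_it_mixture(A, eval_losses, w, eta=0.5):
    """One mirror-descent-style step over the training-skill mixture.

    A:           (n_eval x n_train) skills-graph matrix; A[i, j] > 0 if
                 training skill j helps evaluation skill i.
    eval_losses: current validation loss per evaluation skill.
    w:           current mixture weights over training skills (sums to 1).
    """
    # Upweight training skills whose downstream evaluation skills still have high loss.
    scores = A.T @ eval_losses            # influence-weighted loss per training skill
    w_new = w * np.exp(eta * scores)      # multiplicative-weights update
    return w_new / w_new.sum()            # project back onto the simplex

# Toy run: two training skills, two evaluation skills.
A = np.array([[1.0, 0.2],
              [0.3, 1.0]])
w = np.array([0.5, 0.5])
for step in range(3):
    # In practice these losses come from evaluating the partially trained LM;
    # here we fake losses that decay as a skill receives more weight.
    eval_losses = np.array([2.0, 1.0]) / (1.0 + w)
    w = skill_it_mixture(A, eval_losses, w)
    print(step, np.round(w, 3))
```

Note how the weights shift toward whichever training skill currently drives the largest remaining evaluation loss, rather than staying fixed as in skill-stratified sampling.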
2307.14430#6 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | For the fine-tuning setting, we show that on the widely-used Natural Instructions dataset [40, 64], our algorithm over a mixture of skills achieves up to 13.6% lower loss on a target skill than solely training on that skill, given the same overall training budget. For the out-of-domain setting, when our training skills do not align perfectly with evaluation skills, our algorithm achieves the lowest loss on 11 out of 12 evaluation skills corresponding to task categories in the Natural Instructions test tasks dataset, compared to random and skill-stratified sampling on the training data. Figure 2: Heatmaps of adjacency matrices we compute for skill graphs for Alpaca, Pile of Law, and Natural Instructions. Negative elements and diagonals are thresholded to 0 for clarity. See Appendix C.2 for descriptions of how they were constructed and larger versions. | 2307.14430#5 | 2307.14430#7 | 2307.14430 | [
"2101.00027"
]
|
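We do not know the precise procedure behind the Figure 2 adjacency matrices (the paper defers details to Appendix C.2), but one plausible brute-force estimate is to train briefly on each skill j and record how much each skill i's validation loss drops relative to a baseline. The `train_and_eval` callback below is a hypothetical stand-in for such a short training run.

```python
import numpy as np

def estimate_skills_graph(n_skills, train_and_eval, baseline_losses):
    """Estimate A[i, j]: loss reduction on skill i after a short run on skill j.

    train_and_eval(j) -> per-skill validation losses after briefly training
                         a model on data for skill j alone (user-supplied).
    baseline_losses   -> per-skill losses of the base model before that run.
    """
    A = np.zeros((n_skills, n_skills))
    for j in range(n_skills):
        after = train_and_eval(j)
        A[:, j] = baseline_losses - after   # positive => skill j helped skill i
    np.fill_diagonal(A, 0.0)                # Figure 2 zeroes the diagonal for clarity
    return np.clip(A, 0.0, None)            # and thresholds negative entries to 0

# Dummy stand-in for an actual short training run, just so the sketch executes.
rng = np.random.default_rng(0)
baseline = np.full(4, 2.0)
A = estimate_skills_graph(4, lambda j: baseline - rng.uniform(0, 0.3, size=4), baseline)
print(np.round(A, 2))
```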
2307.14430#7 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | We finally apply our framework to a case study on the recent RedPajama 1.2 trillion token dataset [57]. We use the data mixture produced by SKILL-IT to continually pre-train a 3B-parameter model. We find that SKILL-IT achieves higher accuracy with 1B tokens than uniform sampling over data sources with 3B tokens. # 2 Skills framework First, we propose definitions of skills and ordered skill sets in order to formalize our intuition about how models learn skills, and we demonstrate that not just any existing notion of data groups can characterize an ordered skill set in a dataset. Then, we demonstrate the existence of ordered skill sets on synthetic and real data, which shows how viewing data through a skills-based framework can help with training and understanding model performance. Finally, we explore unsupervised skill recovery from data, finding that embedding-based approaches do not adequately recover synthetic skills. | 2307.14430#6 | 2307.14430#8 | 2307.14430 | [
"2101.00027"
]
|
2307.14430#8 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | # 2.1 Definitions We first present a definition of an individual skill. Let the input space of all possible text data be X, where x ∈ X is an individual text sample that a next-token-prediction LM f ∈ F : X → X is trained on. We quantify learning via a metric L : F × X → R, which maps from a model and evaluation data to a scalar quantity. In our setup, we use the cross-entropy validation loss applied over next-token predictions as our metric L. | 2307.14430#7 | 2307.14430#9 | 2307.14430 | [
"2101.00027"
]
|
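As a concrete instance of the metric L, the sketch below computes a skill's validation loss as mean next-token cross-entropy in PyTorch. The toy model and batching are placeholders; only the loss computation itself is the standard next-token objective the definition refers to.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def skill_validation_loss(model, token_batches):
    """L(f, x): mean next-token cross-entropy over a skill's validation slice.

    model:         maps token ids (batch, seq) -> logits (batch, seq, vocab).
    token_batches: iterable of LongTensors of token ids for one skill.
    """
    total, count = 0.0, 0
    for tokens in token_batches:
        logits = model(tokens)                          # (B, T, V)
        # Shift so the logits at position t predict token t+1.
        loss = F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),
            tokens[:, 1:].reshape(-1),
            reduction="sum",
        )
        total += loss.item()
        count += tokens[:, 1:].numel()
    return total / count

# Toy usage with a random "model" so the sketch runs end to end.
vocab = 50
toy_model = lambda ids: torch.randn(ids.shape[0], ids.shape[1], vocab)
batches = [torch.randint(0, vocab, (2, 16)) for _ in range(3)]
print(skill_validation_loss(toy_model, batches))
```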