id: string (12–15 chars)
title: string (8–162 chars)
content: string (1–17.6k chars)
prechunk_id: string (0–15 chars)
postchunk_id: string (0–15 chars)
arxiv_id: string (10 chars)
references: list (length 1)
2309.02033#54
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Evaluating Large Language Models Trained on Code. CoRR abs/2107.03374 (2021). [19] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. https://vicuna.lmsys.org
2309.02033#53
2309.02033#55
2309.02033
[ "2306.11644" ]
2309.02033#55
Data-Juicer: A One-Stop Data Processing System for Large Language Models
[20] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling Instruction-Finetuned Language Models. CoRR abs/2210.11416 (2022). [21] Alibaba Cloud. 2023. https://tongyi.aliyun.com [22] Alibaba Cloud. 2023. https://www.alibabacloud.com/en/product/machine-learning [23] Yann Collet and Murray Kucherawy. 2021. Zstandard Compression and the "application/zstd" Media Type. RFC 8878. [24] Together Computer. 2023. RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset. https://github.com/togethercomputer/RedPajama-Data [25] Michael J Cormier, Jonathan R Belyeu, Brent S Pedersen, Joseph Brown, Johannes Köster, and Aaron R Quinlan. 2021. Go Get Data (GGD) is a framework that facilitates reproducible access to genomic data. Nature Communications 12, 1 (2021), 2151. [26] Common Crawl. 2023. https://commoncrawl.org/ [27] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.
2309.02033#54
2309.02033#56
2309.02033
[ "2306.11644" ]
2309.02033#56
Data-Juicer: A One-Stop Data Processing System for Large Language Models
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT (1). 4171–4186. [28] Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten P. Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen S. Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2022. GLaM: Efficient Scaling of Language Models with Mixture-of-Experts. In ICML. 5547–5569. [29] EleutherAI. 2023. Pythia-1.4B. https://huggingface.co/EleutherAI/pythia-1.4b [30] Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, and Liwei Wang. 2023. Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective. CoRR abs/2305.15408 (2023).
2309.02033#55
2309.02033#57
2309.02033
[ "2306.11644" ]
2309.02033#57
Data-Juicer: A One-Stop Data Processing System for Large Language Models
[31] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. CoRR abs/2101.00027 (2021). [32] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation. [33] Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. In EMNLP (Findings). 3356–3369. [34] Xinyang Geng and Hao Liu. 2023. OpenLLaMA: An Open Reproduction of LLaMA. https://github.com/openlm-research/open_llama [35] Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. 2023.
2309.02033#56
2309.02033#58
2309.02033
[ "2306.11644" ]
2309.02033#58
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Textbooks Are All You Need. arXiv:2306.11644 [cs.CL] [36] Project Gutenberg. 2023. https://www.gutenberg.org/ [37] Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, and Jun Zhu. 2021. Pre-trained models: Past, present and future. AI Open 2 (2021), 225–250. [38] Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. 2023. ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings. CoRR abs/2305.11554 (2023). [39] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021.
2309.02033#57
2309.02033#59
2309.02033
[ "2306.11644" ]
2309.02033#59
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Measuring Massive Multitask Language Understanding. In ICLR. [40] Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022. Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor. CoRR abs/2212.09689 (2022). [41] Technology Innovation Institute. 2023. Falcon-RW-1B. https://huggingface.co/tiiuae/falcon-rw-1b [42] Technology Innovation Institute. 2023. Falcon-RW-1B. https://huggingface.co/FlagAlpha/Atom-7B [43] Gautier Izacard, Patrick S. H. Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022.
2309.02033#58
2309.02033#60
2309.02033
[ "2306.11644" ]
2309.02033#60
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Few-shot Learning with Retrieval Augmented Language Models. CoRR abs/2208.03299 (2022). [44] Abhinav Jain, Hima Patel, Lokesh Nagalapatti, Nitin Gupta, Sameep Mehta, Shanmukha Guttula, Shashank Mujumdar, Shazia Afzal, Ruhi Sharma Mittal, and Vitobha Munigala. 2020. Overview and importance of data quality for machine learning tasks. In KDD. 3561–3562. [45] Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Baochang Ma, and Xiangang Li. 2023. BELLE: Be Everyone's Large Language model Engine. https://github.com/LianjiaTech/BELLE. [46] jsonargparse. 2023. https://github.com/omni-us/jsonargparse [47] Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating Training Data Mitigates Privacy Risks in Language Models. In ICML. 10697–10707. [48] Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. 2023. OpenAssistant Conversations - Democratizing Large Language Model Alignment. CoRR abs/2304.07327 (2023). [49] Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In EMNLP. [50] Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In EMNLP (Demonstration).
2309.02033#59
2309.02033#61
2309.02033
[ "2306.11644" ]
2309.02033#61
Data-Juicer: A One-Stop Data Processing System for Large Language Models
[51] Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Sasko, Quentin Lhoest, Angelina McMillan-Major, Gérard Dupont, Stella Biderman, Anna Rogers, Loubna Ben Allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Muñoz, Jian Zhu, Daniel van Strien, Zaid Alyafeai, Khalid Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Ifeoluwa Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Alexandra Sasha Luccioni, and Yacine Jernite. 2022. The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset. In NeurIPS. [52] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2022. Deduplicating Training Data Makes Language Models Better.
2309.02033#60
2309.02033#62
2309.02033
[ "2306.11644" ]
2309.02033#62
Data-Juicer: A One-Stop Data Processing System for Large Language Models
In ACL (1). 8424–8445. [53] Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In EMNLP (1). 3045–3059. [54] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART:
2309.02033#61
2309.02033#63
2309.02033
[ "2306.11644" ]
2309.02033#63
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In ACL. 7871–7880. [55] Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Sasko, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander M. Rush, and Thomas Wolf. 2021. Datasets: A Community Library for Natural Language Processing. In EMNLP (Demos). 175–184. [56] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. 2017. Hyperband: A novel bandit-based approach to hyperparameter optimization.
2309.02033#62
2309.02033#64
2309.02033
[ "2306.11644" ]
2309.02033#64
Data-Juicer: A One-Stop Data Processing System for Large Language Models
J. Mach. Learn. Res. 18 (2017), 185:1–185:52. [57] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. 2023. StarCoder: may the source be with you! CoRR abs/2305.06161 (2023). [58] Yunxiang Li, Zihan Li, Kai Zhang, Ruilong Dan, and You Zhang. 2023. ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge. CoRR abs/2303.14070 (2023).
2309.02033#63
2309.02033#65
2309.02033
[ "2306.11644" ]
2309.02033#65
Data-Juicer: A One-Stop Data Processing System for Large Language Models
[59] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022.
2309.02033#64
2309.02033#66
2309.02033
[ "2306.11644" ]
2309.02033#66
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Holistic Evaluation of Language Models. CoRR abs/2211.09110 (2022). [60] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment. CoRR abs/2303.16634 (2023). [61] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023.
2309.02033#65
2309.02033#67
2309.02033
[ "2306.11644" ]
2309.02033#67
Data-Juicer: A One-Stop Data Processing System for Large Language Models
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. CoRR abs/2301.13688 (2023). [62] Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, and Daphne Ippolito. 2023. A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity. CoRR abs/2305.13169 (2023). [63] Ilya Loshchilov and Frank Hutter. 2017. Fixing Weight Decay Regularization in Adam. CoRR abs/1711.05101 (2017).
2309.02033#66
2309.02033#68
2309.02033
[ "2306.11644" ]
2309.02033#68
Data-Juicer: A One-Stop Data Processing System for Large Language Models
[64] LZ4. 2023. https://www.lz4.org/ [65] Kamil Malinka, Martin Peresíni, Anton Firc, Ondrej Hujnak, and Filip Janus. 2023. On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree? CoRR abs/2303.11146 (2023). [66] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, and Ion Stoica. 2018. Ray: A Distributed Framework for Emerging AI Applications.
2309.02033#67
2309.02033#69
2309.02033
[ "2306.11644" ]
2309.02033#69
Data-Juicer: A One-Stop Data Processing System for Large Language Models
In OSDI. 561–577. [67] Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. In ICLR. [68] OpenAI. 2022. Our approach to alignment research. OpenAI Blog (August 2022). [69] OpenAI. 2023. GPT-4 Technical Report. CoRR abs/2303.08774 (2023). [70] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022.
2309.02033#68
2309.02033#70
2309.02033
[ "2306.11644" ]
2309.02033#70
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Training language models to follow instructions with human feedback. In NeurIPS. [71] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only. CoRR abs/2306.01116 (2023). [72] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In NAACL-HLT. 2227–2237. [73] Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, and Huajun Chen. 2023.
2309.02033#69
2309.02033#71
2309.02033
[ "2306.11644" ]
2309.02033#71
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Reasoning with Language Model Prompting: A Survey. arXiv:2212.09597 [cs.CL] [74] Qingyi Si and Zheng Lin. 2023. Alpaca-CoT: An Instruction Fine-Tuning Platform with Instruction Data Collection and Unified Large Language Models Interface. https://github.com/PhoebusSi/alpaca-CoT [75] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. (2018). [76] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9. [77] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.
2309.02033#70
2309.02033#72
2309.02033
[ "2306.11644" ]
2309.02033#72
Data-Juicer: A One-Stop Data Processing System for Large Language Models
J. Mach. Learn. Res. (2020), 140:1–140:67. [78] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In KDD. 3505–3506. [79] Xiaozhe Ren, Pingyi Zhou, Xinfan Meng, Xinjing Huang, Yadao Wang, Weichao Wang, Pengfei Li, Xiaoda Zhang, Alexander Podolskiy, Grigory Arshinov, Andrey Bout, Irina Piontkovskaya, Jiansheng Wei, Xin Jiang, Teng Su, Qun Liu, and Jun Yao. 2023. PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing. CoRR abs/2303.10845 (2023).
2309.02033#71
2309.02033#73
2309.02033
[ "2306.11644" ]
2309.02033#73
Data-Juicer: A One-Stop Data Processing System for Large Language Models
[80] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. 2022. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model. CoRR abs/2211.05100 (2022). [81] Omid Shahmirzadi, Adam Lugowski, and Kenneth Younge. 2019. Text similarity in vector space models: a comparative study. In ICMLA. 659–
2309.02033#72
2309.02033#74
2309.02033
[ "2306.11644" ]
2309.02033#74
Data-Juicer: A One-Stop Data Processing System for Large Language Models
666. [82] Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. 2015. Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE 104, 1 (2015), 148–175. [83] Noam Shazeer. 2020. GLU Variants Improve Transformer. abs/2002.05202 (2020). [84] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in HuggingFace. arXiv preprint arXiv:2303.17580 (2023). [85] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019.
2309.02033#73
2309.02033#75
2309.02033
[ "2306.11644" ]
2309.02033#75
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR abs/1909.08053 (2019). [86] Luca Soldaini, Kyle Lo, Rodney Kinney, Aakanksha Naik, Abhilasha Ravichander, Akshita Bhagia, Dirk Groeneveld, Dustin Schwenk, Ian Magnusson, and Khyathi Chandu. 2023. The Dolma Toolkit. Apache 2.0 License, Version 0.9.0, https://github.com/allenai/dolma. [87] Streamlit. 2023. https://streamlit.io/ [88] Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. 2021. RoFormer: Enhanced Transformer with Rotary Position Embedding. CoRR abs/2104.09864 (2021). [89] Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. 2023. PandaGPT: One Model To Instruction-Follow Them All. CoRR abs/2305.16355 (2023). [90] Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, Dianhai Yu, Hao Tian, Hua Wu, and Haifeng Wang. 2021.
2309.02033#74
2309.02033#76
2309.02033
[ "2306.11644" ]
2309.02033#76
Data-Juicer: A One-Stop Data Processing System for Large Language Models
ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation. CoRR abs/2107.02137 (2021). [91] Zhongxiang Sun. 2023. A Short Survey of Viewing Large Language Models in Legal Aspect. CoRR abs/2303.09136 (2023). [92] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023.
2309.02033#75
2309.02033#77
2309.02033
[ "2306.11644" ]
2309.02033#77
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Stanford Alpaca: An Instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca. [93] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models. CoRR abs/2302.13971 (2023). [94] Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, and Colin Raffel. 2022.
2309.02033#76
2309.02033#78
2309.02033
[ "2306.11644" ]
2309.02033#78
Data-Juicer: A One-Stop Data Processing System for Large Language Models
What Language Model Architecture and Pretraining Objective Works Best for Zero-Shot Generalization? In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA (Proceedings of Machine Learning Research, Vol. 162). 22964–22984. [95] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. 2022.
2309.02033#77
2309.02033#79
2309.02033
[ "2306.11644" ]
2309.02033#79
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks. In EMNLP. 5085–5109. [96] Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned Language Models are Zero-Shot Learners. In ICLR. [97] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent Abilities of Large Language Models. CoRR abs/2206.07682 (2022).
2309.02033#78
2309.02033#80
2309.02033
[ "2306.11644" ]
2309.02033#80
Data-Juicer: A One-Stop Data Processing System for Large Language Models
[98] Jerry W. Wei, Le Hou, Andrew K. Lampinen, Xiangning Chen, Da Huang, Yi Tay, Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, and Quoc V. Le. 2023. Symbol tuning improves in-context learning in language models. CoRR abs/2305.08298 (2023). [99] Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, Yong Jiang, and Wenjuan Han. 2023.
2309.02033#79
2309.02033#81
2309.02033
[ "2306.11644" ]
2309.02033#81
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Zero-Shot Information Extraction via Chatting with ChatGPT. CoRR abs/2302.10205 (2023). [100] Wikipedia. 2023. https://en.wikipedia.org/wiki/Main_Page [101] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In EMNLP (Demos). 38–45. [102] Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David S. Rosenberg, and Gideon Mann. 2023. BloombergGPT: A Large Language Model for Finance. CoRR abs/2303.17564 (2023). [103] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, and Adams Wei Yu. 2023.
2309.02033#80
2309.02033#82
2309.02033
[ "2306.11644" ]
2309.02033#82
Data-Juicer: A One-Stop Data Processing System for Large Language Models
DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining. CoRR abs/2305.10429 (2023). [104] Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. 2023. FinGPT: Open-Source Financial Large Language Models. CoRR abs/2306.06031 (2023). [105] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, and Jie Tang. 2022. GLM-130B: An Open Bilingual Pre-trained Model. abs/2210.02414 (2022). [106] Biao Zhang and Rico Sennrich. 2019. Root Mean Square Layer Normalization. In NeurIPS. 12360–12371. [107] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona T. Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open Pre-trained Transformer Language Models. CoRR abs/2205.01068 (2022). [108] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A Survey of Large Language Models. CoRR abs/2303.18223 (2023).
2309.02033#81
2309.02033#83
2309.02033
[ "2306.11644" ]
2309.02033#83
Data-Juicer: A One-Stop Data Processing System for Large Language Models
[109] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models. CoRR abs/2304.06364 (2023). [110] Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015.
2309.02033#82
2309.02033#84
2309.02033
[ "2306.11644" ]
2309.02033#84
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. In ICCV. 19–27. APPENDIX OF DATA-JUICER: A ONE-STOP DATA PROCESSING SYSTEM FOR LARGE LANGUAGE MODELS A ADDITIONAL DETAILS OF DATA-JUICER A.1 Base Classes of OPs in Data-Juicer We illustrate the core base classes of operators (OPs) in Data-Juicer in Listing 1. # A.2 Theoretical Analysis of Space Usage for Caches and Checkpoints Caches are generated by some of the functions of Dataset, such as map and filter. Generally, caches can be categorized into cache data and indices. The total size of a set of indices is very small, so we ignore it in the space usage analysis. In contrast, the size of the cache data is nearly the same as that of the input dataset. We therefore assume that each set of cache data and each checkpoint has the same size as the input dataset, and that one cache data file exists for the original dataset once it is loaded. Assume that there are M Mappers, F Filters, and D Deduplicators in the processing configuration, and that the size of the original dataset is S; the detailed analysis for cache mode and checkpoint mode is as follows. Space Usage of Cache Mode. Caches are generated after each OP. Mappers, Filters, and Deduplicators each generate one set of cache data. In addition, the first Filter generates an extra set of cache data because a new column for storing statistics is added to the dataset. Therefore the total disk space usage of caches is:
2309.02033#83
2309.02033#85
2309.02033
[ "2306.11644" ]
2309.02033#85
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Space[cache_mode] = (1 + M + F + I(F > 0) + D) × S, where I(·) is the indicator function, which returns 1 when its argument is true and 0 otherwise. Space Usage of Checkpoint Mode. Checkpoints are only generated when an exception or error occurs. However, caches are still stored after disabling the cache mode due to the features of Dataset. We clean up older caches after each OP. The detailed cleanup pipeline is: 1) OP_i finishes; 2) caches for OP_i are generated; 3) caches for OP_{i-1} are cleaned up.
2309.02033#84
2309.02033#86
2309.02033
[ "2306.11644" ]
2309.02033#86
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Thus, in theory, at most two sets of caches exist at the same time during step 2. Considering the caches of the original dataset, the peak disk space usage of caches in checkpoint mode is: Space[checkpoint_mode] = 3 × S. # B ADDITIONAL NUMERICAL RESULTS # Table 5: Evaluation results of three types of quality classifiers.
Quality Classifier | Precision | Recall | F1
GPT-3 | 96.82% | 98.14% | 97.47%
Chinese | 98.00% | 99.30% | 98.64%
Code | 71.23% | 54.21% | 61.56%
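To make the two space estimates above concrete, here is a minimal Python sketch of the same bookkeeping; the function and variable names are illustrative and are not part of the Data-Juicer API.

```python
def cache_mode_space(num_mappers: int, num_filters: int,
                     num_dedups: int, dataset_size: float) -> float:
    """Total disk usage of caches in cache mode: one cache set per OP,
    plus one for the loaded dataset, plus one extra set for the first
    Filter (which adds a stats column)."""
    extra_filter_cache = 1 if num_filters > 0 else 0
    return (1 + num_mappers + num_filters
            + extra_filter_cache + num_dedups) * dataset_size


def checkpoint_mode_peak_space(dataset_size: float) -> float:
    """Peak disk usage in checkpoint mode: the original dataset's cache
    plus at most two OP cache sets alive at the same time."""
    return 3 * dataset_size


# Example: 3 Mappers, 2 Filters, 1 Deduplicator on a 500 GB dataset.
print(cache_mode_space(3, 2, 1, 500))    # (1 + 3 + 2 + 1 + 1) * 500 = 4000 GB
print(checkpoint_mode_peak_space(500))   # 1500 GB
```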
2309.02033#85
2309.02033#87
2309.02033
[ "2306.11644" ]
2309.02033#87
Data-Juicer: A One-Stop Data Processing System for Large Language Models
class Formatter:
    ...
    def load_dataset(self, *args) -> Dataset: ...

class Mapper:
    ...
    def process(self, sample: Dict) -> Dict: ...

class Filter:
    ...
    def compute_stats(self, sample: Dict) -> Dict: ...
    def process(self, sample: Dict) -> bool: ...

class Deduplicator:
    ...
    def compute_hash(self, sample: Dict) -> Dict: ...
    def process(self, dataset: Dataset) -> Dataset: ...

Listing 1: The illustration of OP base classes in Data-Juicer. B.1 Quality Classifier We first show how we reproduce the GPT-3 quality classifier and achieve comparable performance. We follow the training procedure of the quality classifier in GPT-3 [9], which uses a logistic regression classifier with features from the standard tokenizer and HashingTF of PySpark. Based on this, we expand the training pipeline to Chinese text and various code types. The training details are listed in Table 6, where the keeping method is one of:
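Before the keeping methods are listed, here is a brief, hypothetical sketch of how the base classes in Listing 1 are meant to be extended; the operator below is made up for illustration and is not an OP shipped with Data-Juicer.

```python
from typing import Dict

class LowercaseMapper:  # in Data-Juicer this would subclass Mapper
    """Toy OP that lowercases the text field of each sample."""

    def __init__(self, text_key: str = "text"):
        self.text_key = text_key

    def process(self, sample: Dict) -> Dict:
        # Mappers edit one sample at a time and return the edited sample.
        sample[self.text_key] = sample[self.text_key].lower()
        return sample

# Usage sketch: dataset.map(LowercaseMapper().process)
```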
2309.02033#86
2309.02033#88
2309.02033
[ "2306.11644" ]
2309.02033#88
Data-Juicer: A One-Stop Data Processing System for Large Language Models
label: doc_score > 0.5; pareto [9]: doc_score > 1 - np.random.pareto(α), with α = 9. We split these datasets into training and evaluation splits with a ratio of 4:1. The classifiers trained on the training split are then evaluated on the evaluation split. Experimental results are shown in Table 5. As we can see, the reproduced GPT-3 classifier and its Chinese version perform well, while the Code version does not. We speculate that the current positive/negative splitting method for the Code quality classifier might not be a good choice, and we leave this issue to future research. Besides, we compare the keeping ratios obtained when these classifiers are used to re-sample CommonCrawl against that of the original GPT-3 quality classifier, as shown in Table 4. The keeping ratio of the original GPT-3 quality classifier is estimated from the data sizes before and after the filtering described in the GPT-3 paper [9]. The keeping ratios of our reproduced GPT-3 quality classifiers are basically aligned with the original one. B.2 Data Recipes For pre-training data, we acquired a vast amount of raw textual corpora, primarily following the procedural guidelines of RedPajama [24] and the Pile [31]. The common subsets were merged and subjected to Data-Juicer refinements. Table 6: Training configuration of 3 types of quality classifiers.
2309.02033#87
2309.02033#89
2309.02033
[ "2306.11644" ]
2309.02033#89
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Table 7: Statistics of Data-Juicerâ s pre-training data. Component #Tokens CommonCrawl C4 360,925,581,674 181,951,688,729 44.91% 22.64% GitHub 65,076,921,292 8.10% Books Wikipedia 26,389,944,579 17,615,935,449 6.57% 5.48% arXiv 29,093,082,586 3.62% PubMed Central StackExchange 25,589,708,647 19,793,629,900 3.18% 2.46% FreeLaw 13,057,506,102 1.62% PubMed Abstracts 5,208,343,613 0.65% USPTO 4,021,281,155 0.50% EuroParl 780,962,770 0.10% HackerNews 485,584,871 0.06% PhilPapers 478,040,431 0.06% HIH ExPorter 436,414,852 0.05% Sampling prop. For fine-tuning data, we merge and refine tens of Alpaca-CoT datasets. Each dataset can be categorized into English, Chinese and Multilingual by language; into instruct fine-tuning, and chat fine-tuning including sinlge-round dialog, multi-round dialog and preference by usage; multi-task and task-specific by task type; and human-generated, self-instruct, and mixed collection of datasets by the generation method. The detailed numbers of datasets for each category are presented in Table 8. Table 8: Statistics of Data-Juicer fine-tuning data used in our experiments. â These tags are newly added by Data-Juicer compared to the original tag sets of Alpaca-CoT [74].â
2309.02033#88
2309.02033#90
2309.02033
[ "2306.11644" ]
2309.02033#90
Data-Juicer: A One-Stop Data Processing System for Large Language Models
CFTâ indicates Chat Fine-Tuning. Category Sub-Category #Datasets English 28 Language Chinese 14 Multilingual 3 Instruct Fine-Tuning (IFT) 17 Usageâ CFT: Single-Round Dialog CFT: Multi-Round Dialog 23 2 CFT: Preference 5 Task Type Multi-Task Task-Specific 27 13 Human-Generated 3 Generation Method Self-Instruct Mixted 12 5 Collection of Datasets 19 B.3 Experiments Details B.3.1 Models and Training For Pre-training Data. We adhere to the official paper [93] and leverage open-source implementation [34] to build standard LLaMA models. Basically, it is to apply RM- SNorm [106], the SwiGLU activation [83], and rotary positional embedding [88] on the decoder-only transformer architecture. The LLaMA-1.3B model is composed of 24 transformer layers, each with 16 self-attention heads and 2048 bottleneck units. LLMs are pre-trained using the AdamW optimizer [63] with hyper-parameters ð
2309.02033#89
2309.02033#91
2309.02033
[ "2306.11644" ]
2309.02033#91
Data-Juicer: A One-Stop Data Processing System for Large Language Models
½1 = 0.9 and ð ½2 = 0.95. For LLaMA-1.3B, the initial learning rate gradually increases to 2e-5 using 1% warm-up steps and finally decays to 10% through a cosine schedule. The weight decay is set to 0.1 and the gradient â 2-norm is clipped to 1.0. More information about these datasets can be found on the More information about these datasets can be found on the Data-Juicer recipes page? of our repository. # Data-Juicer recipes page2 of our repository. 2https://github.com/alibaba/data-juicer/blob/main/configs/data_juicer_recipes B.3.2 Models and Training of Fine-Tuning Data. In fine-tuning, we choose LLaMA-7B as our basic model and fine-tuned it for 3 epochs. We follow the hyper-parameter settings in Alpaca [92]. Specifically, the optimizer is AdamW with a learning rate of 2e-5, global batch size of 256, and weight decay of 0. The learning rate schedules in a cosine style with 3% initial warm-up steps. Regarding the data recipes in Table 3, for (CFT, EN) case, we consider 5 competitive subsets (Alpaca, GPTeacher, FastChat, Gua- naco, and CodeAlpaca) from Alpaca-CoT as candidate datasets; for (CFT, ZH) case, we use (AlpacaGPT4, Belle, Instinwild) as candi- date datasets. Generally speaking, we bucket from these candidate datasets according to more than a dozen built-in analytical dimen- sions, sampling a fixed amount of data from each dimension to increase the diversity of the processed data as appropriately as possible. More detailed hyper-parameters of data processing can be found in our released data recipes. Both the pre-trained and fine-tuned reference models are re- leased in our homepage. B.3.3 System Performance Experiments. The experiments of end-to-end processing mentioned in section 7.2.1 are all conducted on the same machine with 128 cores of Intel(R) Xeon(R) Platinum 8369B models and about 990GB memory.
2309.02033#90
2309.02033#92
2309.02033
[ "2306.11644" ]
2309.02033#92
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Before starting these experiments, the original datasets, third-party models, and other as- sets will be prepared in advance for both baselines and Data-Juicer, and the intermediate cache files will be cleaned after every com- plete process for Data-Juicer. After processing, we use the same number of processes for processing the dataset to export the result dataset to the local SSD. As for the resource monitoring tool, itâ s implemented based on the psutil3 library. It samples the memory for all related processes every second during the processing pipeline. Then we compute the average memory usage by summing the memory usage over all processes and dividing by the number of processes used in each experiment. Finally, we aggregate all data and compute the average memory usage over time.
2309.02033#91
2309.02033#93
2309.02033
[ "2306.11644" ]
2309.02033#93
Data-Juicer: A One-Stop Data Processing System for Large Language Models
B.3.4 End-to-end System Baselines. We mainly compared the end-to-end system performance between our Data-Juicer and two state-of-the-art baselines in the above experiments w.r.t system performance: RedPajama [24] and Dolma [86]. Besides the empirical comparsiton in Sec.7.2.1, here we make more detailed introduction and comparison about them. RedPajama. 4 The RedPajama project, developed by Together AI, initially aims to reproduce the LLaMA training dataset [93] and open-source the entire code for data collection and processing, making it a significant and popular contribution to the LLM com- munity. This is the primary reason for selecting it as our baseline. RedPajama provides a reproduced version of all seven subsets of the LLaMA training dataset, including arXiv, Books, C4, Common- Crawl, GitHub Code, Stack Exchange, and Wikipedia. While RedPajama has made valuable contributions, our work explores different aspects and offers complementary features. For instance: (1) RedPajamaâ s design is closely tied to specific datasets, which present challenges for adapting its data processing pipelines to other datasets. (2) Its focus on reproducing the LLaMA datasets lead to trade-offs in efficiency, which is not the primary concern of the RedPajama project. (3) The current data processing component in RedPajama lacks systematization and customization.
2309.02033#92
2309.02033#94
2309.02033
[ "2306.11644" ]
2309.02033#94
Data-Juicer: A One-Stop Data Processing System for Large Language Models
Adding new 3https://github.com/giampaolo/psutil 4We compared RedPajama in our experiments with its github commit ID as: 45b37c2a1d1e495b0f48549ef3ce03ff029f7881. data processing methods to the existing pipelines would require understanding and modifying a significant portion of the code. As a result, most users typically opt to utilize the RedPajama Dataset directly rather than attempting to customize or improve its data processing pipelines. Dolma. 5 The Dolma project, originating from Allen AI, com- prises two components: the Dolma Dataset and the Dolma Toolkit. It is also a newly established data processing initiative. We selected the Dolma Toolkit as a baseline because its objective of generating pre-training data for language modeling aligns with one of our target data types (we focus on both pre-training and fine-tuning data). The toolkit offers numerous â Taggersâ that enable attribute tagging (analogous to â statsâ in Data-Juicer) for each document sample. These tags are then used to filter out samples with undesir- able attributes. Users have the flexibility to create custom taggers tailored to their specific needs. However, we encountered several limitations when using Dolma for dataset processing. Firstly, Dolmaâ s workflow involves multi- ple stagesâ tagging, deduplication, mixing, and various configura- tionsâ lacking support for an end-to-end data processing pipeline. Secondly, to leverage high-performance parallel processing, users are required to partition the input dataset into multiple shards in advance, incurring additional overhead. Thirdly, Dolma imposes certain requirements on input datasets, such as mandatory fields and a specific directory structure, necessitating further preprocess- ing before use. Moreover, it restricts input formats to JSONL or its gzipped variant. These constraints diminish the toolkitâ s flexibility, thereby increasing the cost of use and rendering the Dolma Toolkit relatively less user-friendly. B.3.5 Scalability. Our experiments are performed on a platform comprising 16 servers, each equipped with a 64-core Intel(R) Xeon(R) Platinum CPU (mix of 8269CY and 8163 models) and 512 GB of memory. The network bandwidth shared among these servers is 20 Gbps. We utilize NAS storage to house both the raw data and the processed results.
2309.02033#93
2309.02033#95
2309.02033
[ "2306.11644" ]
2309.02033#95
Data-Juicer: A One-Stop Data Processing System for Large Language Models
For the scalability experiments, we consider the two baselines as follows: â ¢ Data-Juicer on Ray: We implement a Ray [66] executor for Data-Juicer, which only adaptes the underlying interfaces of the HuggingFace-datasets with Ray-datasets, while all OPs of Data-Juicer remain unchanged. This implies that usersâ code based on our native Python version can be seamlessly migrated from a single-machine version to distributed computing environ- ments. â ¢ Data-Juicer on Beam: This method is based on Apache Beam with the Apache Flink Runner. When compared to the Ray ver- sion, the Beam version requires additional code development to meet the demands of the Beam data processing pipeline. This in- cludes the adaptations of several OPs and the replacement of the Formatter/Exporter with counterparts in Beam. B.4 Per-Task Evaluation For a thorough and consolidated assessment, we have summarized the individual scores of evaluated LLMs on the 16 core HELM assessment tasks in Table 9. 5We compared Dolma in our experiments with its github commit 5a010a2685914b1db7744426abfb4b9ece52da95.
2309.02033#94
2309.02033#96
2309.02033
[ "2306.11644" ]
2309.02033#96
Data-Juicer: A One-Stop Data Processing System for Large Language Models
ID as: Table 9: Evaluation results on 16 core tasks of the HELM benchmark.
Task | Falcon-1.3B | Pythia-1.4B | LLaMA-1.3B (Data-Juicer) | LLaMA-1.3B (Data-Juicer IFT)
MMLU | 24.7 | 26.0 | 25.9 | 27.0
BoolQ | 63.0 | 56.0 | 49.0 | 56.0
NarrativeQA | 32.1 | 31.5 | 38.2 | 49.9
NaturalQuestions (closed-book) | 10.7 | 10.5 | 10.1 | 11.2
NaturalQuestions (open-book) | 50.0 | 49.8 | 45.9 | 54.3
QuAC | 24.3 | 26.5 | 26.0 | 21.7
HellaSwag | 67.0 | 57.0 | 56.0 | 52.0
OpenbookQA | 44.0 | 34.0 | 40.0 | 43.0
TruthfulQA | 19.0 | 21.0 | 33.0 | 33.0
MS MARCO (regular) | 16.8 | 12.9 | 11.2 | 12.1
MS MARCO (TREC) | 33.5 | 27.4 | 26.9 | 28.1
IMDB | 55.0 | 84.0 | 80.0 | 84.0
XSUM | 5.7 | 6.5 | 5.2 | 5.3
CNN/DailyMail | 4.0 | 8.4 | 7.8 | 11.1
CivilComments | 49.4 | 49.7 | 50.1 | 50.0
RAFT | 44.3 | 42.3 | 42.1 | 49.3
2309.02033#95
2309.02033
[ "2306.11644" ]
2309.02427#0
Cognitive Architectures for Language Agents
arXiv:2309.02427v2 [cs.AI] 27 Sep 2023 # Cognitive Architectures for Language Agents Theodore R. Sumers∗ Shunyu Yao∗ Karthik Narasimhan Thomas L. Griffiths Princeton University {sumers, shunyuy, karthikn, tomg}@princeton.edu # Abstract
2309.02427#1
2309.02427
[ "2305.14909" ]
2309.02427#1
Cognitive Architectures for Language Agents
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence. # 1 Introduction Language agents (Weng, 2023; Wang et al., 2023b; Xi et al., 2023; Yao and Narasimhan, 2023) are an emerging class of artificial intelligence (AI) systems that use large language models (LLMs; Vaswani et al., 2017; Brown et al., 2020; Devlin et al., 2019; OpenAI, 2023a) to interact with the world. They apply the latest advances in LLMs to the existing field of agent design (Russell and Norvig, 2013). Intriguingly, this synthesis offers benefits for both fields. On one hand, LLMs possess limited knowledge and reasoning capabilities. Language agents mitigate these issues by connecting LLMs to internal memory and environments, grounding them to existing knowledge or external observations. On the other hand, traditional agents often require handcrafted rules (Wilkins, 2014) or reinforcement learning (Sutton and Barto, 2018), making generalization to new environments challenging (Lake et al., 2016). Language agents leverage commonsense priors present in LLMs to adapt to novel tasks, reducing the dependence on human annotation or trial-and-error learning.
2309.02427#0
2309.02427#2
2309.02427
[ "2305.14909" ]
2309.02427#2
Cognitive Architectures for Language Agents
While the earliest agents used LLMs to directly select or generate actions (Figure 1B; Ahn et al., 2022; Huang et al., 2022b), more recent agents additionally use them to reason (Yao et al., 2022b), plan (Hao et al., 2023; Yao et al., 2023), and manage long-term memory (Park et al., 2023; Wang et al., 2023a) to improve decision-making. This latest generation of cognitive language agents uses remarkably sophisticated internal processes (Figure 1C). Today, however, individual works use custom terminology to describe these processes (such as "tool use", "grounding", "actions"
2309.02427#1
2309.02427#3
2309.02427
[ "2305.14909" ]
2309.02427#3
Cognitive Architectures for Language Agents
), making it difficult to compare different agents, understand how they are evolving over time, or build new agents with clean and consistent abstractions. In order to establish a conceptual framework organizing these efforts, we draw parallels with two ideas from the history of computing and artificial intelligence (AI): production systems and cognitive architectures. Production systems generate a set of outcomes by iteratively applying rules (Newell and Simon, 1972). They originated as string manipulation systems, an analog of the problem that LLMs solve, and were subsequently adopted by the AI community to define systems capable of complex, hierarchically structured behaviors (Newell et al., 1989). To do so, they were incorporated into cognitive architectures that specified control flow for selecting, applying, and even generating new productions (Laird et al., 1987; Laird, 2022;
2309.02427#2
2309.02427#4
2309.02427
[ "2305.14909" ]
2309.02427#4
Cognitive Architectures for Language Agents
∗Equal contribution, order decided by coin flip. Each person reserves the right to list their name first. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents. Figure 1: Different uses of large language models (LLMs). A: In natural language processing (NLP), an LLM takes text as input and outputs text. B: Language agents (Ahn et al., 2022; Huang et al., 2022c) place the LLM in a direct feedback loop with the external environment by transforming observations into text and using the LLM to choose actions. C: Cognitive language agents (Yao et al., 2022b; Shinn et al., 2023; Wang et al., 2023a) additionally use the LLM to manage the agent's internal state via processes such as learning and reasoning. In this work, we propose a blueprint to structure such agents.
2309.02427#3
2309.02427#5
2309.02427
[ "2305.14909" ]
2309.02427#5
Cognitive Architectures for Language Agents
Kotseruba and Tsotsos, 2020). We suggest a meaningful analogy between production systems and LLMs: just as productions indicate possible ways to modify strings, LLMs define a distribution over changes or additions to text. This further suggests that controls from cognitive architectures used with production systems might be equally applicable to transform LLMs into language agents. Thus, we propose Cognitive Architectures for Language Agents (CoALA), a conceptual framework to understand existing language agents and help develop new ones. CoALA organizes agents along three key dimensions: their information storage (divided into working and long-term memories); their action space (divided into internal and external actions); and their decision-making procedure (which is structured as an interactive loop with planning and execution). Through these three concepts (memory, action, and decision-making), we show CoALA can neatly express a large body of diverse agents and identify underexplored directions. Notably, while several recent papers propose conceptual architectures for general intelligence (LeCun, 2022; McClelland et al., 2019) or empirically survey language models and agents (Mialon et al., 2023; Weng, 2023; Wang et al., 2023b), this paper combines elements of both: we propose a theoretical framework and use it to organize diverse empirical work. This grounds our theory to existing practices and allows us to identify both short-term and long-term directions for future work. The plan for the rest of the paper is as follows. Section 2 introduces production systems and cognitive architectures, and Section 3 outlines their parallels with LLMs and language agents. Section 4 introduces the CoALA framework, and surveys and organizes diverse language agents accordingly. Section 5 provides a deeper case study of several prominent agents. Section 6 suggests actionable steps to construct future language agents, while Section 7 highlights open questions in the broader arc of cognitive science and AI. Finally, Section 8 concludes. Readers interested in applied agent design may prioritize Sections 4-6. # 2 Background:
2309.02427#4
2309.02427#6
2309.02427
[ "2305.14909" ]
2309.02427#6
Cognitive Architectures for Language Agents
From Strings to Symbolic AGI We first introduce production systems and cognitive architectures, providing a historical perspective on cognitive science and artificial intelligence: beginning with theories of logic and computation (Post, 1943), and ending with attempts to build symbolic artificial general intelligence (Newell et al., 1989). # 2.1 Production systems for string manipulation In the first half of the twentieth century, a significant line of intellectual work led to the reduction of mathematics (Whitehead and Russell, 1997) and computation (Church, 1932; Turing et al., 1936) to symbolic manipulation. Production systems are one such formalism. Intuitively, production systems consist of a set of rules, each specifying a precondition and an action. When the precondition is met, the action can be taken. The idea originates in efforts to characterize the limits of computation. Post (1943) proposed thinking about arbitrary logical systems in these terms, where formulas are expressed as strings and the conclusions they license are identified by production rules (as one string
2309.02427#5
2309.02427#7
2309.02427
[ "2305.14909" ]
2309.02427#7
Cognitive Architectures for Language Agents
"produces" another). This formulation was subsequently shown to be equivalent to a simpler string rewriting system. In such a system, we specify rules of the form XYZ → XWZ indicating that the string XYZ can be rewritten to the string XWZ. String rewriting plays a significant role in the theory of formal languages, in the form of Chomsky's phrase structure grammar (Chomsky, 1956). # 2.2 Control flow: From strings to algorithms By itself, a production system simply characterizes the set of strings that can be generated from a starting point. However, they can be used to specify algorithms if we impose control flow to determine which productions are executed. For example, Markov algorithms are production systems with a priority ordering (Markov, 1954). The following algorithm implements division-with-remainder by converting a number written as strokes | into the form Q-R, where Q is the quotient of division by 5 and R is the remainder:
*||||| → |*
* →· -
 → *
2309.02427#6
2309.02427#8
2309.02427
[ "2305.14909" ]
2309.02427#8
Cognitive Architectures for Language Agents
where the priority order runs from top to bottom, productions are applied to the first substring matching their preconditions when moving from left to right (including the empty substring, in the last production), and →· indicates the algorithm halts after executing the rule. The first rule effectively "subtracts" five if possible; the second handles the termination condition when no more subtraction is possible; and the third handles the empty substring input case. For example, given the input 11, this would yield the sequence of productions *||||||||||| → |*|||||| → ||*| →· ||-| which is interpreted as 2 remainder 1. Simple productions can result in complex behavior: Markov algorithms can be shown to be Turing complete. # 2.3 Cognitive architectures: From algorithms to agents Production systems were popularized in the AI community by Allen Newell, who was looking for a formalism to capture human problem solving (Newell, 1967; Newell and Simon, 1972). Productions were generalized beyond string rewriting to logical operations: preconditions that could be checked against the agent's goals and world state, and actions that should be taken if the preconditions were satisfied. In their landmark book Human Problem Solving (Newell and Simon, 1972), Allen Newell and Herbert Simon gave the example of a simple production system implementing a thermostat agent:
# 2.3 Cognitive architectures: From algorithms to agents

Production systems were popularized in the AI community by Allen Newell, who was looking for a formalism to capture human problem solving (Newell, 1967; Newell and Simon, 1972). Productions were generalized beyond string rewriting to logical operations: preconditions that could be checked against the agent's goals and world state, and actions that should be taken if the preconditions were satisfied. In their landmark book Human Problem Solving (Newell and Simon, 1972), Allen Newell and Herbert Simon gave the example of a simple production system implementing a thermostat agent:

(temperature > 70°) ∧ (temperature < 72°) → stop
(temperature < 70°) ∧ (furnace off) → turn on furnace
(temperature > 72°) ∧ (furnace on) → turn off furnace
(temperature < 32°) → call for repairs; turn on electric heater

Following this work, production systems were adopted by the AI community. The resulting agents contained large production systems connected to external sensors, actuators, and knowledge bases, requiring correspondingly sophisticated control flow. AI researchers defined "cognitive architectures" that mimicked human cognition, explicitly instantiating processes such as perception, memory, and planning (Adams et al., 2012) to achieve flexible, rational, real-time behaviors (Sun, 2004; Newell, 1980; 1992; Anderson and Lebiere, 2003).
Figure 2: Cognitive architectures augment a production system with sensory groundings, long-term memory, and a decision procedure for selecting actions. A: The Soar architecture, reproduced with permission from Laird (2022). B: Soar's decision procedure uses productions to select and implement actions. These actions may be internal (such as modifying the agent's memory) or external (such as a motor command).
This led to applications from psychological modeling to robotics, with hundreds of architectures and thousands of publications (see Kotseruba and Tsotsos (2020) for a recent survey). A canonical example is the Soar architecture (Fig. 2A). Soar stores productions in long-term memory and executes them based on how well their preconditions match working memory (Fig. 2B). These productions specify actions that modify the contents of working and long-term memory. We next provide a brief overview of Soar and refer readers to Laird (2022; 2019) for deeper introductions.

Memory. Building on psychological theories, Soar uses several types of memory to track the agent's state (Atkinson and Shiffrin, 1968). Working memory (Baddeley and Hitch, 1974) reflects the agent's current circumstances: it stores the agent's recent perceptual input, goals, and results from intermediate, internal reasoning. Long-term memory is divided into three distinct types. Procedural memory stores the production system itself: the set of rules that can be applied to working memory to determine the agent's behavior. Semantic memory stores facts about the world (Lindes and Laird, 2016), while episodic memory stores sequences of the agent's past behaviors (Nuxoll and Laird, 2007).
Grounding. Soar can be instantiated in simulations (Tambe et al., 1995; Jones et al., 1999) or real-world robotic systems (Laird et al., 2012). In embodied contexts, a variety of sensors stream perceptual input into working memory, where it is available for decision-making. Soar agents can also be equipped with actuators, allowing for physical actions and interactive learning via language (Mohan et al., 2012; Mohan and Laird, 2014; Kirk and Laird, 2014).

Decision making. Soar implements a decision loop that evaluates productions and applies the one that matches best (Fig. 2B). Productions are stored in long-term procedural memory. During each decision cycle, their preconditions are checked against the agent's working memory. In the proposal and evaluation phase, a set of productions is used to generate and rank a candidate set of possible actions.[1]

[1] In more detail, Soar divides productions into two types: "operators," which we refer to as actions, and "rules," which are used to propose, evaluate, and execute operators. Differentiating these is conceptually important for Soar but not for language agents, and so we elide the distinction.
The best action is then chosen.[2] Another set of productions is then used to implement the action, for example by modifying the contents of working memory or issuing a motor command.

Learning. Soar supports multiple modes of learning. First, new information can be stored directly in long-term memory: facts can be written to semantic memory, while experiences can be written to episodic memory (Derbinsky et al., 2012). This information can later be retrieved back into working memory when needed for decision-making. Second, behaviors can be modified. Reinforcement learning (Sutton and Barto, 2018) can be used to up-weight productions that have yielded good outcomes, allowing the agent to learn from experience (Nason and Laird, 2005). Most remarkably, Soar is also capable of writing new productions into its procedural memory (Laird et al., 1986), effectively updating its source code.

Cognitive architectures were used broadly across psychology and computer science, with applications including robotics (Laird et al., 2012), military simulations (Jones et al., 1999; Tambe et al., 1995), and intelligent tutoring (Koedinger et al., 1997). Yet they have become less popular in the AI community over the last few decades. This decrease in popularity reflects two of the challenges involved in such systems: they are limited to domains that can be described by logical predicates, and they require many pre-specified rules to function. Intriguingly, LLMs appear well posed to meet these challenges. First, they operate over arbitrary text, making them more flexible than logic-based systems. Second, rather than requiring the user to specify productions, they learn a distribution over productions via pre-training on an internet corpus. Recognizing this, researchers have begun to use LLMs within cognitive architectures, leveraging their implicit world knowledge (Wray et al., 2021) to augment traditional symbolic approaches (Kirk et al., 2023; Romero et al., 2023). Here, we instead import principles from cognitive architectures to guide the design of LLM-based agents.

# 2.4 Language models and agents

Language modeling is a decades-old endeavor in the NLP and AI communities, aiming to develop systems that can generate text given some context (Jurafsky, 2000).
Formally, language models learn a distribution P(w_i | w_<i), where each w is an individual token (word). This model can then generate text by sampling from the distribution, one token at a time. At its core, a language model is a probabilistic input-output system, since there are inherently several ways to continue a text (e.g., "I went to the" → "market" | "beach" | ...). While earlier attempts at modeling language (e.g., n-grams) faced challenges in generalization and scaling, there has been a recent resurgence of the area due to the rise of Transformer-based (Vaswani et al., 2017) LLMs with a large number (billions) of parameters (e.g., GPT-4; OpenAI, 2023a) and smart tokenization schemes. Modern LLMs are trained on enormous amounts of data, which helps them accumulate knowledge from a large number of input-output combinations and successfully generate human-like text (Andreas, 2022). Unexpectedly, training these models on internet-scale text also made them useful for many tasks beyond generating text, such as writing code (Li et al., 2022b; Rozière et al., 2023; Li et al., 2023b), modeling proteins (Meier et al., 2021), and acting in interactive environments (Yao et al., 2022b; Nakano et al., 2021).
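As a toy illustration of this autoregressive factorization (not an example from any cited work), the sketch below estimates bigram counts from a tiny corpus and then generates text by sampling one token at a time from P(w_i | w_{i-1}).

```python
import random

corpus = "i went to the market . i went to the beach .".split()
counts = {}                                   # bigram counts from the tiny corpus
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def sample_next(prev: str) -> str:
    words, freqs = zip(*counts[prev].items())
    return random.choices(words, weights=freqs, k=1)[0]

tokens = ["i", "went", "to", "the"]
while len(tokens) < 8 and tokens[-1] != ".":
    tokens.append(sample_next(tokens[-1]))    # w_i ~ P(w_i | w_{i-1})
print(" ".join(tokens))                       # e.g. "i went to the beach ."
```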
The latter has led to the rise of "language agents": systems that use LLMs as a core computation unit to reason, plan, and act, with applications in areas such as robotics (Ahn et al., 2022), web manipulation (Yao et al., 2022a; Deng et al., 2023), puzzle solving (Yao et al., 2023; Hao et al., 2023), and interactive code generation (Yang et al., 2023). The combination of language understanding and decision-making capabilities is an exciting and emerging direction that promises to bring these agents closer to human-like intelligence.

# 3 Connections between Language Models and Production Systems

Based on their common origins in processing strings, there is a natural analogy between production systems and language models. We first develop this analogy. We then review prompting methods, showing that these efforts recapitulate the algorithms and agents based on production systems, and suggesting that cognitive architectures like those developed for production systems may be usefully applied to LLMs.
[2] If no actions are valid, or multiple actions tie, then an impasse occurs. Soar creates a subgoal to resolve the impasse, resulting in hierarchical task decomposition. We refer the reader to Laird (2022) for a more detailed discussion.

| Prompting Method | Production Sequence |
| --- | --- |
| Zero-shot | Q ⇝_LLM Q A |
| Few-shot (Brown et al., 2020) | Q → Q1 A1 Q2 A2 Q ⇝_LLM Q1 A1 Q2 A2 Q A |
| Zero-shot Chain-of-Thought (Kojima et al., 2022) | Q → Q Step-by-step ⇝_LLM Q Step-by-step A |
| Retrieval Augmented Generation (Lewis et al., 2020) | Q →_Wiki Q O ⇝_LLM Q O A |
| Socratic Models (Zeng et al., 2022) | Q ⇝_VLM Q O ⇝_LLM Q O A |
| Self-Critique (Saunders et al., 2022) | Q ⇝_LLM Q A ⇝_LLM Q A C ⇝_LLM Q A C A |
Table 1: Conceptual diagram illustrating how prompting methods manipulate the input string before generating completions. Q = question, A = answer, O = observation, C = critique, and ⇝ denotes sampling from a stochastic production (the subscript names the model used). These pre-processing manipulations, which can employ other models such as vision-language models (VLMs), or even the LLM itself, can be seen as productions. Prompting methods thus define a sequence of productions.

# 3.1 Language models as probabilistic production systems

In their original instantiation, production systems specified the set of strings that could be generated from a starting point, breaking this process down into a series of string rewriting operations. Language models also define a possible set of expansions or modifications of a string: the prompt provided to the model.[3] For example, we can formulate the problem of completing a piece of text as a production. If X is the prompt and Y the continuation, then we can write this as the production X → X Y.[4] We might want to allow multiple possible continuations, in which case we have X → X Y_i for some set of Y_i. LLMs assign a probability to each of these completions. Viewed from this perspective, the LLM defines a probability distribution over which productions to select when presented with input X, yielding a distribution P(Y_i | X) over possible completions (Dohan et al., 2022). LLMs can thus be viewed as probabilistic production systems that sample a possible completion each time they are called, e.g., X ⇝ X Y.
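The following sketch illustrates this view of an LLM call as a stochastic production X ⇝ X Y, sampling several completions Y_i for the same prompt X. It assumes an OpenAI-style chat API; the client and model names are illustrative assumptions, not prescribed by the text.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def apply_production(x: str, n: int = 3) -> list[str]:
    """Sample n continuations Y_i ~ P(Y | X) and return the rewritten strings X Y_i."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",            # illustrative model name
        messages=[{"role": "user", "content": x}],
        n=n,
        temperature=1.0,
    )
    return [x + " " + choice.message.content for choice in response.choices]

for completion in apply_production("I went to the"):
    print(completion)   # e.g. "I went to the market ...", "I went to the beach ..."
```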
This probabilistic form offers both advantages and disadvantages compared to traditional production systems. The primary disadvantage of LLMs is their inherent opaqueness: while production systems are defined by discrete and human-legible rules, LLMs consist of billions of uninterpretable parameters. This opaqueness, coupled with inherent randomness from their probabilistic formulation, makes it challenging to analyze or systematically control their behaviors (Romero et al., 2023; Valmeekam et al., 2022). Nonetheless, their scale and pre-training provide massive advantages over traditional production systems. LLMs pre-trained on large-scale internet data learn a remarkably effective prior over string completions, allowing them to solve a wide range of tasks out of the box (Huang et al., 2022b).
# 3.2 Prompt engineering as control flow

The weights of an LLM define a prioritization over output strings (completions), conditioned by the input string (the prompt). The resulting distribution can be interpreted as a task-specific prioritization of productions, in other words, a simple control flow. Tasks such as question answering can be formulated directly as an input string (the question), yielding conditional distributions over completions (possible answers). Early work on few-shot learning (Brown et al., 2020) and prompt engineering (Wei et al., 2022b; Kojima et al., 2022; Xu et al., 2023c) found that the LLM could be further biased towards high-quality productions by pre-processing the input string.

[3] In this work, we focus on autoregressive LLMs, which are typically used for language agents. However, bidirectional LLMs such as BERT (Devlin et al., 2019) can be seen in a similar light: they define a distribution over in-filling productions.

[4] Alternatively, we can treat the prompt as input and take the output of the LLM as the next state, represented by the production X → Y, a more literal form of rewriting.
Figure 3: From language models to language agents. A: Basic structure of an LLM call. Prompt construction selects a template and populates it with variables from working memory. After calling the LLM, the string output is parsed into an action space and executed. An LLM call may result in one or more actions, for example returning an answer, calling a function, or issuing motor commands. B: Prompt chaining techniques such as Self-Critique (Wang et al., 2022b) or Selection-Inference (Creswell et al., 2023) use a pre-defined sequence of LLM calls to generate an output. C: Language agents such as Inner Monologue (Huang et al., 2022c) and ReAct (Yao et al., 2022b) instead use an interactive feedback loop with the external environment. Vision-language models (VLMs) can be used to translate perceptual data into text for the LLM to process.
These simple manipulations, typically concatenating additional text to the input, can themselves be seen as productions, meaning that these methods define a sequence of productions (Table 1). Later work extended these approaches to dynamic, context-sensitive prompts: for example, selecting few-shot examples that are maximally relevant to the input (Liu et al., 2021) or populating a template with external observations from video (Zeng et al., 2022) or databases (Lewis et al., 2020). For a survey of such prompting techniques, see Liu et al. (2023c). Subsequent work used the LLM itself as a pre-processing step, eliciting targeted reasoning to foreground a particular aspect of the problem (Bai et al., 2022; Jin et al., 2022; Ganguli et al., 2023; Madaan et al., 2023; Saunders et al., 2022; Kim et al., 2023; Kirk et al., 2023) or generate intermediate reasoning steps (Tafjord et al., 2021; Creswell et al., 2023; Yao et al., 2023) before returning an answer. Chaining multiple calls to an LLM (Wu et al., 2022a;b; Dohan et al., 2022) allows for increasingly complicated algorithms (Fig. 3).
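A minimal sketch of a Self-Critique-style prompt chain (cf. Table 1 and Fig. 3B) is shown below. The prompt templates are illustrative assumptions, and `llm` stands for any callable that maps a prompt string to a completion string.

```python
def self_critique(llm, question: str) -> str:
    """Three chained productions: answer, critique the answer, then revise it."""
    answer = llm(f"Question: {question}\nAnswer:")                                   # Q ⇝ Q A
    critique = llm(f"Question: {question}\nAnswer: {answer}\nCritique the answer:")  # Q A ⇝ Q A C
    revised = llm(
        f"Question: {question}\nAnswer: {answer}\n"
        f"Critique: {critique}\nImproved answer:"
    )                                                                                # Q A C ⇝ Q A C A
    return revised
```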
# 3.3 Towards cognitive language agents

Language agents move beyond pre-defined prompt chains and instead place the LLM in a feedback loop with the external environment (Fig. 1B). These approaches first transform multimodal input into text and pass it to the LLM. The LLM's output is then parsed and used to determine an external action (Fig. 3C). Early agents interfaced the LLM directly with the external environment, using it to produce high-level instructions based on the agent's state (Ahn et al., 2022; Huang et al., 2022c; Dasgupta et al., 2022). Later work developed more sophisticated language agents that use the LLM to perform intermediate reasoning before selecting an action (Yao et al., 2022b). The most recent agents incorporate sophisticated learning strategies such as reflecting on episodic memory to generate new semantic inferences (Shinn et al., 2023) or modifying their program code to generate procedural knowledge (Wang et al., 2023a), using their previous experience to adapt their future behaviors. These cognitive language agents employ nontrivial LLM-based reasoning and learning (Fig. 1C). Just as cognitive architectures were used to structure production systems' interactions with agents' internal state and external environments, we suggest that they can help design LLM-based cognitive agents. In the remainder of the paper, we use this perspective to organize existing approaches and highlight promising extensions.
Figure 4: Cognitive architectures for language agents (CoALA). A: CoALA defines a set of interacting modules and processes. The decision procedure executes the agent's source code. This source code consists of procedures to interact with the LLM (prompt templates and parsers), internal memories (retrieval and learning), and the external environment (grounding). B: Temporally, the agent's decision procedure executes a decision cycle in a loop with the external environment. During each cycle, the agent uses retrieval and reasoning to plan by proposing and evaluating candidate learning or grounding actions. The best action is then selected and executed. An observation may be made, and the cycle begins again.
# 4 Cognitive Architectures for Language Agents (CoALA): A Conceptual Framework

We present Cognitive Architectures for Language Agents (CoALA) as a framework to organize existing language agents and guide the development of new ones. CoALA positions the LLM as the core component of a larger cognitive architecture (Figure 4). Under CoALA, a language agent stores information in memory modules (Section 4.1), and acts in an action space structured into external and internal parts (Figure 5):
• External actions interact with external environments (e.g., control a robot, communicate with a human, navigate a website) through grounding (Section 4.2).

• Internal actions interact with internal memories. Depending on which memory gets accessed and whether the access is read or write, internal actions can be further decomposed into three kinds: retrieval (read from long-term memory; Section 4.3), reasoning (update the short-term working memory with the LLM; Section 4.4), and learning (write to long-term memory; Section 4.5).

Language agents choose actions via decision-making, which follows a repeated cycle (Section 4.6, Figure 4B). In each cycle, the agent can use reasoning and retrieval actions to plan. This planning subprocess selects a grounding or learning action, which is executed to affect the outside world or the agent's long-term memory. CoALA's decision cycle is analogous to a program's "main" procedure (a method without return values, as opposed to functions) that runs continuously in a loop, accepting new perceptual input and calling various action procedures in response; a schematic sketch is given below.
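All class, method, and variable names in the following sketch are illustrative assumptions rather than a released API; the proposal and evaluation steps are deliberately simplistic placeholders.

```python
class Agent:
    """A schematic CoALA-style decision loop: perceive, plan, act, learn."""

    def __init__(self, llm, env, memory=None):
        self.llm, self.env = llm, env
        self.memory = memory if memory is not None else {}

    def propose(self):
        prompt = f"Observation: {self.memory.get('working')}\nPropose candidate actions, one per line:"
        return [a for a in self.llm(prompt).splitlines() if a.strip()]

    def evaluate(self, action):
        # Placeholder value function; real agents may use LLM reasoning or learned values.
        return -len(action)

    def run(self, max_cycles=10):
        observation = self.env.reset()
        for _ in range(max_cycles):
            self.memory["working"] = observation                  # perception into working memory
            candidates = self.propose()                           # planning: proposal
            if not candidates:
                break
            action = max(candidates, key=self.evaluate)           # planning: evaluation + selection
            observation = self.env.step(action)                   # execution: grounding action
            self.memory.setdefault("episodic", []).append((action, observation))  # learning
```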
CoALA (Figure 4) is inspired by decades of research in cognitive architectures (Section 2.3), leveraging key concepts such as memory, grounding, learning, and decision-making. Yet the incorporation of an LLM leads to the addition of "reasoning" actions, which can flexibly produce new knowledge and heuristics for various purposes, replacing hand-written rules in traditional cognitive architectures. It also makes text the de facto internal representation, streamlining agents' memory modules. Finally, recent advances in vision-language models (VLMs; Alayrac et al., 2022) can simplify grounding by providing a straightforward translation of perceptual data into text (Zeng et al., 2022).
Figure 5: Agents' action spaces can be divided into internal memory accesses and external interactions with the world. Reasoning and retrieval actions are used to support planning.

The rest of this section details key concepts in CoALA: memory, actions (grounding, reasoning, retrieval, and learning), and decision-making. For each concept, we use existing language agents (or relevant NLP/RL methods) as examples, or note gaps in the literature for future directions.
# 4.1 Memory

Language models are stateless: they do not persist information across calls. In contrast, language agents may store and maintain information internally for multi-step interaction with the world. Under the CoALA framework, language agents explicitly organize information (mainly textual, but other modalities are also allowed) into multiple memory modules, each containing a different form of information. These include short-term working memory and several long-term memories: episodic, semantic, and procedural.

Working memory. Working memory maintains active and readily available information as symbolic variables for the current decision cycle (Section 4.6). This includes perceptual inputs, active knowledge (generated by reasoning or retrieved from long-term memory), and other core information carried over from the previous decision cycle (e.g., the agent's active goals). Previous methods encourage the LLM to generate intermediate reasoning (Wei et al., 2022b; Nye et al., 2021), using the LLM's own context as a form of working memory. CoALA's notion of working memory is more general: it is a data structure that persists across LLM calls. On each LLM call, the LLM input is synthesized from a subset of working memory (e.g., a prompt template and relevant variables). The LLM output is then parsed back into other variables (e.g., an action name and arguments) which are stored back in working memory and used to execute the corresponding action (Figure 3A). Besides the LLM, the working memory also interacts with long-term memories and grounding interfaces. It thus serves as the central hub connecting different components of a language agent.

Episodic memory. Episodic memory stores experience from earlier decision cycles. This can consist of training input-output pairs (Rubin et al., 2021), history event flows (Weston et al., 2014; Park et al., 2023), game trajectories from previous episodes (Yao et al., 2020; Tuyls et al., 2022), or other representations of the agent's experiences. During the planning stage of a decision cycle, these episodes may be retrieved into working memory to support reasoning. An agent can also write new experiences from working to episodic memory as a form of learning (Section 4.5).

Semantic memory. Semantic memory stores an agent's knowledge about the world and itself. Traditional NLP or RL approaches that leverage retrieval for reasoning or decision-making initialize semantic memory from an external database for knowledge support. For example, retrieval-augmented methods in NLP (Lewis et al., 2020; Borgeaud et al., 2022; Chen et al., 2017) can be viewed as retrieving from a semantic memory of unstructured text (e.g., Wikipedia). In RL, "reading to learn" approaches (Branavan et al., 2012; Narasimhan et al., 2018; Hanjie et al., 2021; Zhong et al., 2021) leverage game manuals and facts as a semantic memory to affect the policy. While these examples essentially employ a fixed, read-only semantic memory, language agents may also write new knowledge obtained from LLM reasoning into semantic memory as a form of learning (Section 4.5) to incrementally build up world knowledge from experience.

Procedural memory. Language agents contain two forms of procedural memory: implicit knowledge stored in the LLM weights, and explicit knowledge written in the agent's code.
Traditional NLP or RL approaches that leverage retrieval for reasoning or decision-making initialize semantic memory from an external database for knowledge support. For example, retrieval-augmented methods in NLP (Lewis et al., 2020; Borgeaud et al., 2022; Chen et al., 2017) can be viewed as retrieving from a semantic memory of unstructured text (e.g., Wikipedia). In RL, â reading to learnâ approaches (Branavan et al., 2012; Narasimhan et al., 2018; Hanjie et al., 2021; Zhong et al., 2021) leverage game manuals and facts as a semantic memory to affect the policy. While these examples essentially employ a fixed, read-only semantic memory, language agents may also write new knowledge obtained from LLM reasoning into semantic memory as a form of learning (Section 4.5) to incrementally build up world knowledge from experience. Procedural memory. Language agents contain two forms of procedural memory: implicit knowledge stored in the LLM weights, and explicit knowledge written in the agentâ s code. The agentâ s code can be further 9 divided into two types: procedures that implement actions (reasoning, retrieval, grounding, and learning procedures), and procedures that implement decision-making itself (Section 4.6). During a decision cycle, the LLM can be accessed via reasoning actions, and various code-based procedures can be retrieved and executed. Unlike episodic or semantic memory that may be initially empty or even absent, procedural memory must be initialized by the designer with proper code to bootstrap the agent. Finally, while learning new actions by writing to procedural memory is possible (Section 4.5), it is significantly riskier than writing to episodic or semantic memory, as it can easily introduce bugs or allow an agent to subvert its designersâ
# 4.2 Grounding actions

Grounding procedures execute external actions and process environmental feedback into working memory as text. This effectively simplifies the agent's interaction with the outside world as a "text game" with textual observations and actions. We categorize three kinds of external environments:

Physical environments. Physical embodiment is the oldest instantiation envisioned for AI agents (Nilsson, 1984). It involves processing perceptual inputs (visual, audio, tactile) into textual observations (e.g., via pre-trained captioning models), and affecting the physical environments via robotic planners that take language-based commands. Recent advances in LLMs have led to numerous robotic projects (Ahn et al., 2022; Liang et al., 2023a; Singh et al., 2023; Palo et al., 2023; Ren et al., 2023) that leverage LLMs as a "brain" for robots to generate actions or plans in the physical world.
For perceptual input, vision-language models are typically used to convert images to text (Alayrac et al., 2022; Sumers et al., 2023), providing additional context for the LLM (Driess et al., 2023; Huang et al., 2023; Brohan et al., 2022; 2023).

Dialogue with humans or other agents. Classic linguistic interactions allow the agent to accept instructions (Winograd, 1972; Tellex et al., 2011; Chen and Mooney, 2011; Bisk et al., 2016) or learn from people (Nguyen et al., 2021; Sumers et al., 2022; 2021; Wang et al., 2016). Agents capable of generating language may ask for help (Ren et al., 2023; Nguyen et al., 2022b; 2019; Nguyen and Daumé III, 2019) or clarification (Biyik and Palan, 2019; Sadigh et al., 2017; Padmakumar et al., 2022; Thomason et al., 2020; Narayan-Chen et al., 2019), or entertain or emotionally help people (Zhang et al., 2020; Zhou et al., 2018; Pataranutaporn et al., 2021; Hasan et al., 2023; Ma et al., 2023). Recent work also investigates interaction among multiple language agents for social simulation (Park et al., 2023; Jinxin et al., 2023; Gao et al., 2023), debate (Chan et al., 2023; Liang et al., 2023b; Du et al., 2023), improved safety (Irving et al., 2018), or collaborative task solving (Qian et al., 2023; Wu et al., 2023; Hong et al., 2023).
Digital environments. This includes interacting with games (Hausknecht et al., 2020; Côté et al., 2019; Shridhar et al., 2020; Wang et al., 2022a; Liu et al., 2023d), APIs (Schick et al., 2023; Yao et al., 2022b; Parisi et al., 2022; Tang et al., 2023b), and websites (Shi et al., 2017; Nakano et al., 2021; Yao et al., 2022a; Zhou et al., 2023b; Gur et al., 2023; Deng et al., 2023), as well as general code execution (Yang et al., 2023; Le et al., 2022; Ni et al., 2023). Such digital grounding is cheaper and faster than physical or human interaction. It is thus a convenient testbed for language agents and has been studied with increasing intensity in recent years. In particular, for NLP tasks that require augmentation of external knowledge or computation, stateless digital APIs (e.g., search, calculator, translator) are often packaged as "tools" (Parisi et al., 2022; Schick et al., 2023; Xu et al., 2023a; Tang et al., 2023b; Qin et al., 2023), which can be viewed as special "single-use" digital environments.
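As a small illustration of packaging stateless digital APIs as "tools", the sketch below parses an LLM-proposed action string and dispatches it to a toy tool set. The action format and tools are assumptions for illustration only (and `eval` is used here purely as a toy calculator, never for untrusted input).

```python
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),            # toy only: do not eval untrusted input
    "search": lambda query: f"[top result for: {query}]",  # placeholder for a real search API
}

def ground(action_text: str) -> str:
    """Expected format: 'tool_name: argument', as parsed from the LLM output."""
    name, _, arg = action_text.partition(":")
    tool = TOOLS.get(name.strip())
    return tool(arg.strip()) if tool else f"Unknown tool: {name.strip()}"

print(ground("calculator: 2 + 3 * 7"))   # "23", returned to working memory as an observation
```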
# 4.3 Retrieval actions

In CoALA, a retrieval procedure (Li et al., 2022a; Gu et al., 2018) reads information from long-term memories into working memory. Depending on the information and memory type, it could be implemented in various ways, e.g., rule-based, sparse, or dense retrieval. For example, Voyager (Wang et al., 2023a) loads code-based skills from a skill library via dense retrieval to interact with the Minecraft world, effectively retrieving grounding procedures from a procedural memory. Generative Agents (Park et al., 2023) retrieves relevant events from episodic memory via a combination of recency (rule-based), importance (reasoning-based), and relevance (embedding-based) scores. DocPrompting (Zhou et al., 2022a) proposes to leverage library documents to assist code generation, which can be seen as retrieving knowledge from semantic memory. While retrieval plays a key role in human decision-making (Zhou et al., 2023a; Zhao et al., 2022), adaptive and context-specific recall remains understudied in language agents. In Section 6, we suggest a principled integration of decision-making and retrieval as an important future direction.
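The following sketch illustrates a Generative-Agents-style retrieval score combining recency, importance, and relevance. The weighting, decay constant, and memory-item fields are illustrative assumptions rather than the exact formulation used in that work.

```python
import math, time

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def score(item, query_embedding, now=None):
    now = now or time.time()
    recency = math.exp(-(now - item["created_at"]) / 3600.0)   # rule-based decay (1-hour scale)
    importance = item["importance"] / 10.0                      # reasoning-based rating, 1-10
    relevance = cosine(query_embedding, item["embedding"])      # embedding-based similarity
    return recency + importance + relevance

def retrieve(episodic_memory, query_embedding, k=3):
    """Read the top-k scored items from long-term memory into working memory."""
    return sorted(episodic_memory, key=lambda m: score(m, query_embedding), reverse=True)[:k]
```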
# 4.4 Reasoning actions

Reasoning allows language agents to process the contents of working memory to generate new information. Unlike retrieval (which reads from long-term memory into working memory), reasoning reads from and writes to working memory. This allows the agent to summarize and distill insights about the most recent observation (Yao et al., 2022b; Peng et al., 2023), the most recent trajectory (Shinn et al., 2023), or information retrieved from long-term memory (Park et al., 2023). Reasoning can be used to support learning (by writing the results into long-term memory) or decision-making (by using the results as additional context for subsequent LLM calls).

# 4.5 Learning actions

Learning occurs by writing information to long-term memory, which includes a spectrum of diverse procedures.

Updating episodic memory with experience. It is common practice for RL agents to store episodic trajectories to update a parametric policy (Blundell et al., 2016; Pritzel et al., 2017) or establish a non-parametric policy (Ecoffet et al., 2019; Tuyls et al., 2022). For language agents, added experiences in episodic memory may be retrieved later as examples and bases for reasoning or decision-making (Weston et al., 2014; Rubin et al., 2021; Park et al., 2023).

Updating semantic memory with knowledge. Recent work (Shinn et al., 2023; Park et al., 2023) has applied LLMs to reason about raw experiences and store the resulting inferences in semantic memory. For example, Reflexion (Shinn et al., 2023) uses an LLM to reflect on failed episodes and stores the results (e.g., "there is no dishwasher in kitchen") as semantic knowledge to be attached to the LLM context for solving later episodes. Finally, work in robotics (Chen et al., 2023a) uses vision-language models to build a semantic map of the environment, which can later be queried to execute instructions.
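A minimal sketch of such a learning action, in the spirit of Reflexion, is given below: the agent reflects on a failed episode and writes the resulting inference into semantic memory. The prompt wording is an assumption, and `memory.semantic` refers to a simple list as in the memory sketch in Section 4.1.

```python
def reflect_and_store(llm, memory, failed_trajectory: str) -> None:
    """Learning action: turn a failed episode into a reusable semantic fact."""
    reflection = llm(
        "You failed the following episode:\n"
        f"{failed_trajectory}\n"
        "State one reusable fact or lesson that explains the failure:"
    )
    memory.semantic.append(reflection)     # e.g. "There is no dishwasher in the kitchen."

def build_context(memory, task: str) -> str:
    """Attach recently learned facts to the LLM context for later episodes."""
    lessons = "\n".join(memory.semantic[-5:])
    return f"Known facts:\n{lessons}\n\nTask: {task}"
```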
Updating LLM parameters (procedural memory). The LLM weights represent implicit procedural knowledge. These can be adjusted to an agent's domain by fine-tuning during the agent's lifetime. Such fine-tuning can be accomplished via supervised (Liu et al., 2023b; Zhang et al., 2023b) or imitation learning (Hussein et al., 2017), reinforcement learning (RL) from environment feedback (Sutton and Barto, 2018), human feedback (RLHF; Christiano et al., 2017; Ouyang et al., 2022; Nakano et al., 2021), or AI feedback (Bai et al., 2022; Liu et al., 2023e). Classic LLM self-improvement methods (Huang et al., 2022a; Zelikman et al., 2022) use an external measure such as consistency (Wang et al., 2022b) to select generations to fine-tune on. In reinforcement learning settings, this can be extended to use environmental feedback instead: for example, XTX (Tuyls et al., 2022) periodically fine-tunes a small language model on high-scoring trajectories stored in episodic memory, which serves as a robust "exploitation" policy to reach exploration frontiers in the face of stochasticity.
Fine-tuning the agent's LLM is a costly form of learning; thus, present studies specify learning schedules. However, as training becomes more efficient, or if agents utilize smaller subtask-specific LLMs, it may be possible to allow language agents to autonomously determine when and how to fine-tune their LLMs.

Updating agent code (procedural memory). CoALA allows agents to update their source code, thus modifying the implementation of various procedures.
These can be broken down as follows:

• Updating reasoning (e.g., prompt templates; Gao et al., 2020; Zhou et al., 2022b). For example, APE (Zhou et al., 2022b) infers prompt instructions from input-output examples, then uses these instructions as part of the LLM prompt to assist task solving. Such a prompt update can be seen as a form of learning to reason.

• Updating grounding (e.g., code-based skills; Liang et al., 2023a; Ellis et al., 2021; Wang et al., 2023a). For example, Voyager (Wang et al., 2023a) maintains a curriculum library. Notably, current methods are limited to creating new code skills to interact with external environments.
• Updating retrieval. To our knowledge, these learning options are not studied in recent language agents. Retrieval is usually considered a basic action designed with some fixed implementation (e.g., BM25 or dense retrieval), but research in query/document expansion (Nogueira et al., 2019; Wang et al., 2023c; Tang et al., 2023a) or retrieval distillation (Izacard et al., 2021) may be helpful for language agents to learn better retrieval procedures.
• Updating learning or decision-making. Finally, it is theoretically possible for CoALA agents to learn new procedures for learning or decision-making, thus providing significant adaptability. In general, however, updates to these procedures are risky both for the agent's functionality and alignment. At present, we are not aware of any language agents that implement this form of learning; we discuss such possibilities more in Section 6.

While RL agents usually fix one way of learning (e.g., Q-learning, PPO, or A3C) and learn by updating model parameters, language agents can select from a diversity of learning procedures. This allows them to learn rapidly by storing task-relevant language (cheaper and quicker than parameter updates), and to leverage multiple forms of learning to compound their self-improvement (e.g., Generative Agents, discussed in Section 5). Finally, while our discussion has mostly focused on adding to memory, modifying and deleting (a case of "unlearning") are understudied in recent language agents. We address these areas more in Section 6.

# 4.6 Decision making

With various actions (grounding, learning, reasoning, retrieval) in the action space, how should a language agent choose which action to apply? This is handled by the decision-making procedure, which is effectively the top-level or "main" agent program.
CoALA structures this top-level program into decision cycles (Figure 4B) which yield an external grounding action (Section 4.2) or an internal learning action (Section 4.5). In each cycle, program code defines a sequence of reasoning and retrieval actions to propose and evaluate alternatives (planning stage), then executes the selected action (execution stage); then the cycle loops again.

Planning stage. During planning, reasoning and retrieval can be flexibly applied to propose, evaluate, and select actions, and these sub-stages could interleave or iterate to build up multi-step simulations (Tamari et al., 2020) before taking an external action (Yao et al., 2023; Hao et al., 2023). It also enables agents to iteratively improve candidate solutions, for example by using the LLM to simulate them, identifying defects, and proposing modifications that address those defects (Kirk et al., 2023; Shinn et al., 2023).
• Proposal. The proposal sub-stage generates one or more action candidates. The usual approach is to use reasoning (and optionally retrieval) to sample one (Huang et al., 2022c) or more (Chen et al., 2021; Wang et al., 2022b) external grounding actions from the LLM. For simple domains with limited actions, the proposal stage might simply include all actions (e.g., SayCan in Section 5). More sophisticated agents use if-else or while-if code structures (Wang et al., 2023a; Park et al., 2023), while agents deployed in well-defined domains may utilize structured simulators (Haslum et al., 2019) to generate plausible rollouts (Liu et al., 2023a; Dagan et al., 2023).
• Evaluation. If multiple actions are proposed, the evaluation sub-stage assigns a value to each. This may use heuristic rules, LLM (perplexity) values (Ahn et al., 2022), learned values (Yao et al., 2020), LLM reasoning (Yao et al., 2023; Hao et al., 2023), or some combination. In particular, LLM reasoning can help evaluate actions by internally simulating their grounding feedback from the external world (Hao et al., 2023; Yang et al., 2023).
• Selection. Given a set of actions and their values, the selection step either selects one to execute or rejects them and loops back to the proposal step. Depending on the form of action values, selection may occur via argmax, softmax, or an alternative such as majority vote (Wang et al., 2022b).

Execution. The selected action is applied by executing the relevant procedures from the agent's source code. Depending on the agent implementation, this might be an external grounding action (e.g., an API call; Section 4.2) or an internal learning action (e.g., a write to episodic memory; Section 4.5). An observation can be made from the environment, providing feedback from the agent's action, and the cycle loops again.
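The selection strategies mentioned above can be illustrated with a few lines of Python; the implementations below are generic sketches rather than any particular agent's code.

```python
import math, random
from collections import Counter

def argmax_select(actions, values):
    return max(zip(actions, values), key=lambda av: av[1])[0]

def softmax_select(actions, values, temperature=1.0):
    weights = [math.exp(v / temperature) for v in values]
    return random.choices(actions, weights=weights, k=1)[0]

def majority_vote(sampled_actions):
    # e.g. the same proposal prompt sampled several times, then the modal action is chosen
    return Counter(sampled_actions).most_common(1)[0][0]
```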
| Agent | Long-term Memory[5] | External Grounding | Internal Actions | Decision Making |
| --- | --- | --- | --- | --- |
| SayCan (Ahn et al., 2022) | - | physical | - | evaluate |
| ReAct (Yao et al., 2022b) | - | digital | reason | propose |
| Voyager (Wang et al., 2023a) | procedural | digital | reason/retrieve/learn | propose |
| Generative Agents (Park et al., 2023) | episodic/semantic | digital/agent | reason/retrieve/learn | propose |
| Tree of Thoughts (Yao et al., 2023) | - | digital[6] | reason | propose, evaluate, select |

Table 2: Some recent language agents cast into the CoALA framework.

Empirically, many early language agents simply use LLMs to propose an action (Schick et al., 2023), a sequence of actions (Huang et al., 2022b), or evaluate a fixed set of actions (Ahn et al., 2022) without intermediate reasoning or retrieval. Follow-up work (Yao et al., 2022b; Shinn et al., 2023; Xu et al., 2023b; Lin et al., 2023; Wang et al., 2023a; Park et al., 2023) has exploited intermediate reasoning and retrieval to analyze the situation, make and maintain action plans, refine the previous action given the environmental feedback, and leverage a more complex procedure to propose a single action. Most recently, research has started to investigate more complex decision-making employing iterative proposal and evaluation to consider multiple actions. These procedures are modeled after classical planning algorithms: for example, Tree of Thoughts (Yao et al., 2023) and RAP (Hao et al., 2023) use LLMs to implement BFS/DFS and Monte Carlo Tree Search (MCTS; Browne et al., 2012), respectively. LLMs are used to generate proposals (i.e., to simulate rollouts conditioned on an action) and evaluate them (i.e., to value the outcome of the proposed action).

# 5 Case Studies

With variations and ablations of the memory modules, action space, and decision-making procedures, CoALA can express a wide spectrum of language agents.
Table 2 lists some popular recent methods across diverse domains, from Minecraft to robotics, from pure reasoning to social simulacra. CoALA helps characterize their internal mechanisms and reveal their similarities and differences in a simple and structured way.

SayCan (Ahn et al., 2022) grounds a language model to robotic interactions in a kitchen to satisfy user commands (e.g., "I just worked out, can you bring me a drink and a snack to recover?"). Its long-term memory is procedural only (an LLM and a learned value function). The action space is external only: a fixed set of 551 grounding skills (e.g., "find the apple", "go to the table"), with no internal actions of reasoning, retrieval, or learning.
During decision-making, SayCan evaluates each action using a combination of LLM and learned values, which balance a skill's usefulness and groundedness. SayCan therefore employs the LLM (in conjunction with the learned value function) as a single-step planner.

ReAct (Yao et al., 2022b) is a language agent grounded to various digital environments (e.g., Wikipedia API, text game, website). Like SayCan, it lacks semantic or episodic memory and therefore has no retrieval or learning actions. Its action space consists of (internal) reasoning and (external) grounding. Its decision cycle is fixed to use a single reasoning action to analyze the situation and (re)make action plans, then generates a grounding action without evaluation or selection stages. ReAct can be considered the simplest language agent that leverages both internal and external actions, and is the initial work that demonstrates their synergizing effects: reasoning helps guide acting, while acting provides environmental feedback to support reasoning.

Voyager (Wang et al., 2023a) is a language agent grounded to the Minecraft API. Unlike SayCan, which grounds to perception via the learned value function, Voyager's grounding is text-only. It has a long-term procedural memory that stores a library of code-based grounding procedures, a.k.a. skills (e.g., "combatZombie", "craftStoneSword"). This library is hierarchical: complex skills can use simpler skills as sub-procedures (e.g., "combatZombie" may call "craftStoneSword" if no sword is in the inventory). Most impressively, its action space has all four kinds of actions: grounding, reasoning, retrieval, and learning (by adding new grounding procedures).

[5] All agents contain some procedural memory (agent code and LLM weights), so here we only list writable procedural memory.

[6] Special digital grounding with the only external action being submitting a final answer.
combatZombieâ , â craftStoneSwordâ ). This library is hierarchical: complex skills can use simpler skills as sub-procedures (e.g., â combatZombieâ may call â craftStoneSwordâ if no sword is in inventory). Most impressively, its action space has all four kinds of actions: grounding, reasoning, retrieval, and learning (by adding new grounding 5All agents contain some procedural memory (agent code and LLM weights), so here we only list writable procedural memory. 6Special digital grounding with the only external action being submitting a final answer.
13 procedures). During a decision cycle, Voyager first reasons to propose a new task objective if it is missing in the working memory, then reasons to propose a code-based grounding procedure to solve the task. In the next decision cycle, Voyager reasons over the environmental feedback to determine task completion. If successful, Voyager selects a learning action adding the grounding procedure to procedural memory; otherwise, it uses reasoning to refine the code and re-executes it. The importance of long-term memory and procedural learning is empirically verified by comparing to baselines like ReAct and AutoGPT and ablations without the procedural memory. Voyager is shown to better explore areas, master the tech tree, and zero-shot generalize to unseen tasks. Generative Agents (Park et al., 2023) are language agents grounded to a sandbox game affording interaction with the environment and other agents. Its action space also has all four kinds of actions: grounding, reasoning, retrieval, and learning. Each agent has a long-term episodic memory that stores events in a list. These agents use retrieval and reasoning to generate reflections on their episodic memory (e.g., â
I like to ski now.â ) which are then written to long-term semantic memory. During decision-making, it retrieves relevant reflections from semantic memory, then reasons to make a high-level plan of the day. While executing the plan, the agent recieves stream of grounding observations; it can reason over these to maintain or adjust the plan. Tree of Thoughts (ToT) (Yao et al., 2023) can be seen as a special kind of language agent with only one external action: submitting a final solution to a reasoning problem (game of 24, creative writing, crosswords puzzle). It has no long-term memory, and only reasoning in its internal action space, but differs from all previous agents in its deliberate decision-making. During planning, ToT iteratively proposes, evaluates, and selects â thoughtsâ (reasoning actions) based on LLM reasoning, and systematically maintains them via a tree search algorithm to enable global exploration as well as local backtrack and foresight. # 6 Actionable Insights Compared to some recent empirical surveys around language agents (Mialon et al., 2023; Weng, 2023; Wang et al., 2023b), CoALA offers a theoretical framework grounded in the well-established research of cognitive architectures. This leads to a unique and complementary set of actionable insights. Agent design: thinking beyond monolithic designs for individual applications. Perhaps our most important suggestion is that agents should follow a systematic, modular design. CoALA can help practitioners in this regard: for example, it may be beneficial to consider whether an application requires semantic or episodic memory; whether the agent should be capable of modifying its semantic memory; and so on. Practically, just as standardized software is used across robotics platforms (Quigley, 2009; Macenski et al., 2022), a framework for language agents would consolidate technical investment and improve compatibility.
â ¢ In academic research, standardized terms allow conceptual comparisons across works (Table 2), and open-source implementations would further facilitate modular plug-and-play and re-use. For example, the theoretical framework of Markov Decision Processes (Puterman, 2014) provides a standardized set of concepts and terminology (e.g., state, action, reward, transition) for reinforcement learning (Sutton and Barto, 2018). Correspondingly, empirical frameworks like OpenAI Gym (Brockman et al., 2016) provided standardized abstractions (e.g., obs, reward, done, info = env.step(action)) that facilitate empirical RL work. Thus, it would be timely and impactful to also implement useful abstractions (e.g., Memory, Action, Agent classes) for language agents, and cast simpler agents into such an empirical CoALA framework as examples for building more complex agents.
â ¢ In industry applications, maintaining a single company-wide â language agent libraryâ would reduce technical debt (Sculley et al., 2014; Lwakatare et al., 2020) by facilitating systematic testing and component re-use across individual agent deployments. It could also standardize the customer experience: rather than interacting with a hodgepodge of language agents developed by individual teams, end users would experience a context-specific instantiation of the same base agent. â ¢ LLMs vs. code in agent design. CoALA agents possess two forms of procedural memory: agent code (deterministic rules) and LLM parameters (a large, stochastic production system). Agent code is interpretable and extensible, but often brittle in face of stochasticity and limited to address situations
14 the designer anticipates. In contrast, LLM parameters are hard to interpret, but offer significant zero-shot flexibility in new contexts (Huang et al., 2022b). CoALA thus suggests using code sparingly to implement generic algorithms that complement LLM limitations, e.g., implementing tree search to mitigate myopia induced by autoregressive generation (Yao et al., 2023; Hao et al., 2023). Structured reasoning: thinking beyond prompt engineering. Early work on prompt engineering manipulated the LLMâ s input and output via low-level string operations. CoALA suggests a more structured reasoning procedure to update working memory variables. â ¢ Prompting frameworks like LangChain (LangChain, 2022) and LlamaIndex (LlamaIndex, 2023) can be used to define higher-level sequences of reasoning steps, reducing the burden of reasoning per LLM call and the low-level prompt crafting efforts. Structural output parsing solutions such as Guidance (Guidance, 2023) and OpenAI function calling (OpenAI, 2023b) can help update working memory variables systematically. Defining and building good working memory modules will also be an important direction of future research. Such modules may be especially important for industry solutions where LLM reasoning needs to seamlessly integrate with large-scale code infrastructure.
â ¢ Reasoning usecases in agents can inform and reshape LLM training in terms of the types (e.g., reasoning for self-evaluation, reflection, action generation, etc.) and formats (e.g. ,CoT (Wei et al., 2022b), ReAct (Yao et al., 2022b), Reflexion (Shinn et al., 2023)) of training instances. By default, existing LLMs are trained and optimized for NLP tasks, but agent applications have explored new modes of LLM reasoning (e.g., self-evaluation) that have proven broadly useful. LLMs trained or finetuned towards these capabilities will more likely be the backbones of future agents. Long-term memory: thinking beyond retrieval augmentation. While traditional retrieval-augmented language models (Guu et al., 2020; Lewis et al., 2020; Borgeaud et al., 2022) only read from human-written corpora, memory-augmented language agents can both read and write self-generated content autonomously. This opens up numerous possibilities for efficient lifelong learning.
â ¢ Combining existing human knowledge with new experience and skills can help agents bootstrap to learn efficiently. For example, a code-writing agent could be endowed with semantic programming knowledge in the form of manuals or textbooks. It could then generate its own episodic knowledge from experience; reflect on these experiences to generate new semantic knowledge; and gradually create procedural knowledge in the form of a code library storing useful methods. â ¢ Integrating retrieval and reasoning can help to better ground planning. Recent computational psychological models implicate an integrated process of memory recall and decision-making (Zhou et al., 2023a; Zhao et al., 2022) â suggesting that adaptive mechanisms interleaving memory search and forward simulation will allow agents to make the most of their knowledge. Learning: thinking beyond in-context learning or finetuning.
CoALAâ s definition of â learningâ encompasses these methods, but extends further to storing new experience or knowledge, or writing new agent code (Section 4.5). Important future directions include: â ¢ Meta-learning by modifying agent code would allow agents to learn more effectively. For example, learning better retrieval procedures could enable agents to make better use of their experience. Recent expansion-based techniques (Nogueira et al., 2019; Wang et al., 2023c; Tang et al., 2023a) could allow agents to reason about when certain knowledge would be useful, and store this as metadata to facilitate later recall. These forms of meta-learning would enable agents to go beyond human-written code, yet are understudied due to their difficulty and risk.
• New forms of learning (and unlearning) could include fine-tuning smaller models for specific reasoning sub-tasks (Zelikman et al., 2022; Huang et al., 2022a; Ahn et al., 2022), deleting unneeded memory items for "unlearning" (Nguyen et al., 2022c), and studying the interaction effects between multiple forms of learning (Tuyls et al., 2022; Park et al., 2023; Xie et al., 2023; Khattab et al., 2022).