doi (string, len 10–10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31–31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable) | journal_ref (string, len 8–194, nullable) | primary_category (string, len 5–17) | published (string, len 8–8) | updated (string, len 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
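Each row below pairs one text chunk of an arXiv paper with that paper's metadata (title, summary, authors, comment, and a list of referenced arXiv ids); the paper-level fields repeat on every chunk of the same paper. The following is a minimal sketch of how such rows could be regrouped into whole papers, assuming the table has been exported as JSON Lines with exactly these field names; the file path and helper names are illustrative, not part of the dataset.

```python
import json
from collections import defaultdict

def load_rows(path):
    """Yield one dict per row of a JSON Lines export with the schema above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def regroup_by_paper(rows):
    """Collect chunks per paper (keyed by doi) and restore chunk-id order."""
    chunks = defaultdict(list)
    for row in rows:
        chunks[row["doi"]].append((row["chunk-id"], row["chunk"]))
    # Paper-level fields (title, summary, references) repeat on every chunk row,
    # so the paper text is simply its chunks concatenated in chunk-id order.
    return {
        doi: "\n".join(text for _, text in sorted(parts))
        for doi, parts in chunks.items()
    }

if __name__ == "__main__":
    papers = regroup_by_paper(load_rows("chunks.jsonl"))  # illustrative path
    print({doi: len(text) for doi, text in papers.items()})
```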
2309.02427 | 98 | M. G. Mattar and N. D. Daw. Prioritized memory access explains planning and hippocampal replay. Nature Neuroscience, 21(11):1609–1617, 2018.
J. L. McClelland, F. Hill, M. Rudolph, J. Baldridge, and H. Schütze. Extending machine language models toward human-level language understanding. arXiv preprint arXiv:1912.05877, 2019.
J. Meier, R. Rao, R. Verkuil, J. Liu, T. Sercu, and A. Rives. Language models enable zero-shot prediction of the effects of mutations on protein function. bioRxiv, 2021.
G. Mialon, R. Dessì, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Rozière, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz, et al. Augmented language models: a survey. arXiv preprint arXiv:2302.07842, 2023.
S. Mohan and J. Laird. Learning goal-oriented hierarchical tasks from situated interactive instruction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28, 2014.
| 2309.02427#98 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
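The summary field in the record above describes a CoALA-style agent in terms of modular memories, a structured action space split between internal (memory) and external (environment) actions, and a decision-making loop that uses the LLM to choose among candidate actions. The paper chunked here is a survey and defines no reference implementation, so the sketch below only illustrates that loop; the class names, the fixed candidate actions, and the `llm` callable are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Memory:
    """Modular memories: a short-lived working memory plus long-term stores."""
    working: List[str] = field(default_factory=list)
    episodic: List[str] = field(default_factory=list)
    semantic: List[str] = field(default_factory=list)

@dataclass
class CoalaStyleAgent:
    llm: Callable[[str], str]               # any text-in/text-out model
    memory: Memory = field(default_factory=Memory)

    def candidate_actions(self) -> List[str]:
        # Internal actions read/write memory; external actions affect the environment.
        return ["retrieve: episodic memory", "reason: update the plan", "external: act"]

    def decide(self, observation: str) -> str:
        self.memory.working.append(f"obs: {observation}")
        context = "\n".join(self.memory.working[-10:])
        options = self.candidate_actions()
        prompt = ("Context:\n" + context +
                  "\nChoose exactly one action from: " + ", ".join(options))
        action = self.llm(prompt)
        self.memory.episodic.append(f"{observation} -> {action}")
        return action

if __name__ == "__main__":
    stub_llm = lambda prompt: "external: act"   # stand-in for a real LLM call
    agent = CoalaStyleAgent(llm=stub_llm)
    print(agent.decide("the door is locked"))
```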
2309.02427 | 99 | S. Mohan and J. Laird. Learning goal-oriented hierarchical tasks from situated interactive instruction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28, 2014.
S. Mohan, A. H. Mininger, J. R. Kirk, and J. E. Laird. Acquiring grounded representations of words with situated interactive instruction. Advances in Cognitive Systems, 2:113–130, 2012.
R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, et al. WebGPT: Browser-Assisted Question-Answering with Human Feedback. arXiv preprint arXiv:2112.09332, 2021.
K. Narasimhan, R. Barzilay, and T. Jaakkola. Deep transfer in reinforcement learning by language grounding.
In Journal of Artificial Intelligence Research (JAIR), 2018.
A. Narayan-Chen, P. Jayannavar, and J. Hockenmaier. Collaborative dialogue in Minecraft. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5405–5415. Association for Computational Linguistics, 2019. | 2309.02427#99 | Cognitive Architectures for Language Agents |
2309.02033 | 100 | [52] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2022. Deduplicating Training Data Makes Language Models Better. In ACL (1). 8424–8445.
[53] Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In EMNLP (1). 3045–3059.
[54] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In ACL. 7871–7880. | 2309.02033#100 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
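The summary in the record above presents Data-Juicer recipes as compositions of many small operators (filters, mappers, deduplicators) applied to heterogeneous text samples. The sketch below illustrates only that composition pattern; the operator names and the recipe structure are invented for illustration and are not Data-Juicer's actual API (see the linked repository for that).

```python
import re
from typing import Callable, Iterable, List, Optional

Sample = dict                                    # e.g. {"text": "..."} plus metadata
Operator = Callable[[Sample], Optional[Sample]]  # returning None drops the sample

def strip_html_mapper() -> Operator:
    def op(sample: Sample) -> Sample:
        cleaned = dict(sample)
        cleaned["text"] = re.sub(r"<[^>]+>", " ", sample["text"])
        return cleaned
    return op

def min_length_filter(min_chars: int) -> Operator:
    return lambda sample: sample if len(sample["text"]) >= min_chars else None

def run_recipe(samples: Iterable[Sample], recipe: List[Operator]) -> List[Sample]:
    """Apply operators in order; a None result drops the sample (a filter)."""
    kept = []
    for sample in samples:
        for op in recipe:
            sample = op(sample)
            if sample is None:
                break
        if sample is not None:
            kept.append(sample)
    return kept

if __name__ == "__main__":
    recipe = [strip_html_mapper(), min_length_filter(20)]
    data = [{"text": "<p>short</p>"},
            {"text": "<p>a sufficiently long training sample</p>"}]
    print(run_recipe(data, recipe))
```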
2309.02427 | 100 | S. Nason and J. E. Laird. Soar-RL: Integrating reinforcement learning with Soar. Cognitive Systems Research, 6(1):51–59, 2005.
A. Newell. Studies in problem solving: Subject 3 on the crypt-arithmetic task DONALD + GERALD = ROBERT. Technical report, Carnegie Mellon University, 1967.
A. Newell. Physical symbol systems. Cognitive science, 4(2):135–183, 1980.
A. Newell. Précis of unified theories of cognition. Behavioral and Brain Sciences, 15(3):425–437, 1992.
A. Newell and H. A. Simon. Human problem solving. Prentice-Hall, 1972.
A. Newell, P. S. Rosenbloom, and J. E. Laird. Symbolic architectures for cognition. Foundations of cognitive science, pages 93–131, 1989.
K. Nguyen and H. Daumé III. Help, Anna! visual navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning. arXiv preprint arXiv:1909.01871, 2019. | 2309.02427#100 | Cognitive Architectures for Language Agents |
2309.02033 | 101 | [55] Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Sasko, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander M. Rush, and Thomas Wolf. 2021. Datasets: A Community Library for Natural Language Processing. In EMNLP (Demos). 175–184.
[56] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. 2017. Hyperband: A novel bandit-based approach to hyperparameter optimization. J. Mach. Learn. Res. 18 (2017), 185:1–185:52. | 2309.02033#101 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
K. Nguyen, D. Dey, C. Brockett, and B. Dolan. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12527–12537, 2019.
K. Nguyen, Y. Bisk, and H. Daumé III. A framework for learning to request rich and contextually useful information from humans. In ICML, July 2022a.
K. X. Nguyen. Language models are bounded pragmatic speakers. In First Workshop on Theory of Mind in Communicating Agents, 2023.
K. X. Nguyen, D. Misra, R. Schapire, M. Dudík, and P. Shafto. Interactive learning from activity description. In International Conference on Machine Learning, pages 8096–8108, 2021.
K. X. Nguyen, Y. Bisk, and H. Daumé III. A framework for learning to request rich and contextually useful information from humans. In International Conference on Machine Learning, pages 16553–16568, 2022b. | 2309.02427#101 | Cognitive Architectures for Language Agents |
2309.02033 | 102 | [57] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer | 2309.02033#102 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 102 | T. T. Nguyen, T. T. Huynh, P. L. Nguyen, A. W.-C. Liew, H. Yin, and Q. V. H. Nguyen. A survey of machine unlearning. arXiv preprint arXiv:2209.02299, 2022c.
A. Ni, S. Iyer, D. Radev, V. Stoyanov, W.-t. Yih, S. Wang, and X. V. Lin. Lever: Learning to verify language-to-code generation with execution. In International Conference on Machine Learning, pages 26106–26128, 2023.
N. J. Nilsson. Shakey the robot. Technical Note, 1984.
R. Nogueira, W. Yang, J. Lin, and K. Cho. Document expansion by query prediction, 2019.
A. M. Nuxoll and J. E. Laird. Extending cognitive architecture with episodic memory. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1560–1564, 2007.
| 2309.02427#102 | Cognitive Architectures for Language Agents |
2309.02033 | 103 | Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Car- los Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. 2023. StarCoder: may the source be with you! CoRR abs/2305.06161 (2023). | 2309.02033#103 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 103 |
M. Nye, A. J. Andreassen, G. Gur-Ari, H. Michalewski, J. Austin, D. Bieber, D. Dohan, A. Lewkowycz, M. Bosma, D. Luan, et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114, 2021.
OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023a.
OpenAI. Function calling and other API updates, 2023b. URL https://openai.com/blog/function-calling-and-other-api-updates.
L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. | 2309.02427#103 | Cognitive Architectures for Language Agents |
2309.02427 | 104 | A. Padmakumar, J. Thomason, A. Shrivastava, P. Lange, A. Narayan-Chen, S. Gella, R. Piramuthu, G. Tur, and D. Hakkani-Tur. Teach: Task-driven embodied agents that chat. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2017–2025, 2022.
N. D. Palo, A. Byravan, L. Hasenclever, M. Wulfmeier, N. Heess, and M. Riedmiller. Towards a unified agent with foundation models. In Workshop on Reincarnating Reinforcement Learning at ICLR 2023, 2023.
A. Parisi, Y. Zhao, and N. Fiedel. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255, 2022.
J. S. Park, J. C. O'Brien, C. J. Cai, M. R. Morris, P. Liang, and M. S. Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. | 2309.02427#104 | Cognitive Architectures for Language Agents |
2309.02033 | 105 | [59] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic Evaluation of Language Models. CoRR abs/2211.09110 (2022). | 2309.02033#105 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 105 | P. Pataranutaporn, V. Danry, J. Leong, P. Punpongsanon, D. Novy, P. Maes, and M. Sra. AI-generated characters for supporting personalized learning and well-being. Nature Machine Intelligence, 3(12):1013–1022, 2021.
A. Peng, I. Sucholutsky, B. Li, T. R. Sumers, T. L. Griffiths, J. Andreas, and J. A. Shah. Language guided state abstractions. In Workshop on Social Intelligence in Humans and Robots at RSS 2023, 2023.
E. L. Post. Formal reductions of the general combinatorial decision problem. American Journal of Mathematics, 65(2):197–215, 1943.
A. Pritzel, B. Uria, S. Srinivasan, A. P. Badia, O. Vinyals, D. Hassabis, D. Wierstra, and C. Blundell. Neural episodic control. In International conference on machine learning, pages 2827–2836, 2017.
M. L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014. | 2309.02427#105 | Cognitive Architectures for Language Agents |
2309.02033 | 106 | [60] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment. CoRR abs/2303.16634 (2023).
[61] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023. The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. CoRR abs/2301.13688 (2023).
[62] Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, and Daphne Ippolito. 2023. A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity. CoRR abs/2305.13169 (2023).
[63] Ilya Loshchilov and Frank Hutter. 2017. Fixing Weight Decay Regularization in Adam. CoRR abs/1711.05101 (2017).
| 2309.02033#106 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 106 | M. L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
C. Qian, X. Cong, C. Yang, W. Chen, Y. Su, J. Xu, Z. Liu, and M. Sun. Communicative agents for software development. arXiv preprint arXiv:2307.07924, 2023.
Y. Qin, S. Liang, Y. Ye, K. Zhu, L. Yan, Y. Lu, Y. Lin, X. Cong, X. Tang, B. Qian, et al. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023.
M. Quigley. Ros: an open-source robot operating system. In IEEE International Conference on Robotics and Automation, 2009. URL https://api.semanticscholar.org/CorpusID:6324125.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al. Language models are unsupervised
multitask learners. OpenAI blog, 1(8):9, 2019. | 2309.02427#106 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 107 | [63]
[64] LZ4. 2023. https://www.lz4.org/
[65] Kamil Malinka, Martin Perešíni, Anton Firc, Ondrej Hujnak, and Filip Janus. 2023. On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree? CoRR abs/2303.11146 (2023).
[66] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, and Ion Stoica. 2018. Ray: A Distributed Framework for Emerging AI Applications. In OSDI. 561–577.
[67] Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. In ICLR.
[68] OpenAI. 2022. Our approach to alignment research. OpenAI Blog (August 2022). | 2309.02033#107 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 107 | multitask learners. OpenAI blog, 1(8):9, 2019.
A. Z. Ren, A. Dixit, A. Bodrova, S. Singh, S. Tu, N. Brown, P. Xu, L. Takayama, F. Xia, Z. Xu, et al. Robots that ask for help: Uncertainty alignment for large language model planners. In 7th Annual Conference on Robot Learning, 2023.
O. J. Romero, J. Zimmerman, A. Steinfeld, and A. Tomasic. Synergistic integration of large language models and cognitive architectures for robust ai: An exploratory analysis. arXiv preprint arXiv:2308.09830, 2023. | 2309.02427#107 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 108 | [68] OpenAI. 2022. Our approach to alignment research. OpenAI Blog (August 2022).
[69] OpenAI. 2023. GPT-4 Technical Report. CoRR abs/2303.08774 (2023).
[70] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In NeurIPS.
[71] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only. CoRR abs/2306.01116 (2023). | 2309.02033#108 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 108 | B. Rozière, J. Gehring, F. Gloeckle, S. Sootla, I. Gat, X. Tan, Y. Adi, J. Liu, T. Remez, J. Rapin, A. Kozhevnikov, I. Evtimov, J. Bitton, M. P. Bhatt, C. C. Ferrer, A. Grattafiori, W. Xiong, A. Défossez, J. Copet, F. Azhar, H. Touvron, L. Martin, N. Usunier, T. Scialom, and G. Synnaeve. Code llama: Open foundation models for code. ArXiv, abs/2308.12950, 2023.
O. Rubin, J. Herzig, and J. Berant. Learning to retrieve prompts for in-context learning. arXiv preprint arXiv:2112.08633, 2021.
E. Russek, D. Acosta-Kane, B. van Opheusden, M. G. Mattar, and T. Griffiths. Time spent thinking in online chess reflects the value of computation. PsyArXiv, 2022.
S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Pearson Education Limited London, 2013. | 2309.02427#108 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 109 | [72] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In NAACL-HLT. 2227–2237.
[73] Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, and Huajun Chen. 2023. Reasoning with Language Model Prompting: A Survey. arXiv:2212.09597 [cs.CL]
[74] Zheng Lin and Qingyi Si. 2023. Alpaca-CoT: An Instruction Fine-Tuning Platform with Instruction Data Collection and Unified Large Language Models Interface. https://github.com/PhoebusSi/alpaca-CoT
[75] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. (2018).
[76] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9. | 2309.02033#109 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 109 | S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Pearson Education Limited London, 2013.
D. Sadigh, A. D. Dragan, S. Sastry, and S. A. Seshia. Active preference-based learning of reward functions. In N. M. Amato, S. S. Srinivasa, N. Ayanian, and S. Kuindersma, editors, Robotics: Science and Systems XIII, 2017.
W. Saunders, C. Yeh, J. Wu, S. Bills, L. Ouyang, J. Ward, and J. Leike. Self-critiquing models for assisting human evaluators. arXiv preprint arXiv:2206.05802, 2022.
T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. | 2309.02427#109 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 110 | [77] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res. (2020), 140:1–140:67.
[78] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In KDD. 3505–3506.
[79] Xiaozhe Ren, Pingyi Zhou, Xinfan Meng, Xinjing Huang, Yadao Wang, Weichao Wang, Pengfei Li, Xiaoda Zhang, Alexander Podolskiy, Grigory Arshinov, Andrey Bout, Irina Piontkovskaya, Jiansheng Wei, Xin Jiang, Teng Su, Qun Liu, and Jun Yao. 2023. PanGu-Σ: Towards Trillion Parameter Language Model with
Sparse Heterogeneous Computing. CoRR abs/2303.10845 (2023). | 2309.02033#110 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 110 | D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, and M. Young. Machine Learning: The High Interest Credit Card of Technical Debt. In SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop), 2014.
T. Shi, A. Karpathy, L. Fan, J. Hernandez, and P. Liang. World of Bits: An Open-Domain platform for web-based agents. In International Conference on Machine Learning, pages 3135–3144, 2017.
N. Shinn, F. Cassano, B. Labash, A. Gopinath, K. Narasimhan, and S. Yao. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023.
M. Shridhar, X. Yuan, M.-A. Côté, Y. Bisk, A. Trischler, and M. Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768, 2020. | 2309.02427#110 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 111 | [80] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. 2022. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model. CoRR abs/2211.05100 (2022).
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 111 | T. Silver, V. Hariprasad, R. S. Shuttleworth, N. Kumar, T. Lozano-Pérez, and L. P. Kaelbling. Pddl planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022.
T. Silver, S. Dan, K. Srinivas, J. B. Tenenbaum, L. P. Kaelbling, and M. Katz. Generalized Planning in PDDL Domains with Pretrained Large Language Models. arXiv preprint arXiv:2305.11014, 2023.
I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg. Progprompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11523–11530, 2023.
T. Sumers, R. Hawkins, M. K. Ho, T. Griffiths, and D. Hadfield-Menell. How to talk so AI will learn: Instructions, descriptions, and autonomy. Advances in Neural Information Processing Systems, 35:34762–34775, 2022. | 2309.02427#111 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 112 | [81] Omid Shahmirzadi, Adam Lugowski, and Kenneth Younge. 2019. Text similarity in vector space models: a comparative study. In ICMLA. 659–666.
[82] Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. 2015. Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE 104, 1 (2015), 148–175.
[83] Noam Shazeer. 2020. GLU Variants Improve Transformer. CoRR abs/2002.05202 (2020).
[84] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580 (2023).
[85] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR abs/1909.08053 (2019). | 2309.02033#112 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 112 | T. Sumers, K. Marino, A. Ahuja, R. Fergus, and I. Dasgupta. Distilling internet-scale vision-language models into embodied agents. In Proceedings of the 40th International Conference on Machine Learning, pages 32797–32818, 2023.
T. R. Sumers, M. K. Ho, R. D. Hawkins, K. Narasimhan, and T. L. Griffiths. Learning rewards from linguistic feedback. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6002–6010, 2021.
R. Sun. Desiderata for cognitive architectures. Philosophical Psychology, 17(3):341–373, 2004.
R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT press, 2018.
O. Tafjord, B. Dalvi, and P. Clark. Proofwriter: Generating implications, proofs, and abductive statements over natural language. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3621–3634, 2021. | 2309.02427#112 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 113 | [86] Soldaini, Luca and Lo, Kyle and Kinney, Rodney and Naik, Aakanksha and Ravichander, Abhilasha and Bhagia, Akshita and Groeneveld, Dirk and Schwenk, Dustin and Magnusson, Ian and Chandu, Khyathi. 2023. The Dolma Toolkit. Apache 2.0 License, Version 0.9.0, https://github.com/allenai/dolma.
[87] Streamlit. 2023. https://streamlit.io/
[88] Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. 2021. RoFormer: Enhanced Transformer with Rotary Position Embedding. CoRR abs/2104.09864 (2021).
[89] Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. 2023. PandaGPT: One Model To Instruction-Follow Them All. CoRR abs/2305.16355 (2023). | 2309.02033#113 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 113 | R. Tamari, C. Shani, T. Hope, M. R. L. Petruck, O. Abend, and D. Shahaf. Language (re)modelling: Towards embodied language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6268–6281, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.559.
M. Tambe, W. L. Johnson, R. M. Jones, F. Koss, J. E. Laird, P. S. Rosenbloom, and K. Schwamb. Intelligent agents for interactive simulation environments. AI magazine, 16(1):15–15, 1995.
M. Tang, S. Yao, J. Yang, and K. Narasimhan. Referral augmentation for zero-shot information retrieval, 2023a.
Q. Tang, Z. Deng, H. Lin, X. Han, Q. Liang, and L. Sun. ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases. arXiv preprint arXiv:2306.05301, 2023b. | 2309.02427#113 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
2309.02033 | 114 | [90] Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, Dianhai Yu, Hao Tian, Hua Wu, and Haifeng Wang. 2021. ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation. CoRR abs/2107.02137 (2021).
[91] Zhongxiang Sun. 2023. A Short Survey of Viewing Large Language Models in Legal Aspect. CoRR abs/2303.09136 (2023).
[92] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An Instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca. | 2309.02033#114 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 114 | S. Tellex, T. Kollar, S. Dickerson, M. Walter, A. Banerjee, S. Teller, and N. Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 25, pages 1507–1514, 2011.
J. Thomason, M. Murray, M. Cakmak, and L. Zettlemoyer. Vision-and-dialog navigation. In Conference on Robot Learning, pages 394–406. PMLR, 2020.
A. M. Turing et al. On computable numbers, with an application to the entscheidungsproblem. J. of Math, 58(345-363):5, 1936.
J. Tuyls, S. Yao, S. Kakade, and K. Narasimhan. Multi-stage episodic control for strategic exploration in text games. arXiv preprint arXiv:2201.01251, 2022.
K. Valmeekam, A. Olmo, S. Sreedharan, and S. Kambhampati. Large language models still can't plan (a benchmark for llms on planning and reasoning about change). arXiv preprint arXiv:2206.10498, 2022. | 2309.02427#114 | Cognitive Architectures for Language Agents |
2309.02033 | 115 | [93] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models. CoRR abs/2302.13971 (2023).
[94] Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, and Colin Raffel. 2022. What Language Model Architecture and Pretraining Objective Works Best for Zero-Shot Generalization?. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA (Proceedings of Machine Learning Research, Vol. 162). 22964–22984. | 2309.02033#115 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 115 | A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
G. Wang, Y. Xie, Y. Jiang, A. Mandlekar, C. Xiao, Y. Zhu, L. Fan, and A. Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023a.
L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, J. Tang, X. Chen, Y. Lin, W. X. Zhao, Z. Wei, and J.-R. Wen. A survey on large language model based autonomous agents, 2023b.
L. Wang, N. Yang, and F. Wei. Query2doc: Query expansion with large language models. arXiv preprint arXiv:2303.07678, 2023c. | 2309.02427#115 | Cognitive Architectures for Language Agents |
2309.02033 | 116 | [95] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. 2022. Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks. In EMNLP. 5085–5109. Jason Wei, Maarten Bosma, Vincent Y. | 2309.02033#116 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 116 | R. Wang, P. Jansen, M.-A. Côté, and P. Ammanabrolu. Scienceworld: Is your agent smarter than a 5th grader? arXiv preprint arXiv:2203.07540, 2022a.
S. I. Wang, P. Liang, and C. D. Manning. Learning language games through interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2368–2378, 2016.
X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, and D. Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022b.
J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, and W. Fedus. Emergent abilities of large language models. Transactions on Machine Learning Research, 2022a. ISSN 2835-8856. Survey Certification. | 2309.02427#116 | Cognitive Architectures for Language Agents |
2309.02033 | 117 | via Declarative Instructions on 1600+ NLP Tasks. In EMNLP. 5085–5109. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned Language Models are Zero-Shot Learners. In ICLR. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. | 2309.02033#117 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 117 | J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits
reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022b.
L. Weng. Llm-powered autonomous agents. lilianweng.github.io, Jun 2023. URL https://lilianweng.github.io/posts/2023-06-23-agent/.
J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
A. N. Whitehead and B. Russell. Principia Mathematica to *56, volume 2. Cambridge University Press, 1997.
D. E. Wilkins. Practical planning: extending the classical AI planning paradigm. Elsevier, 2014.
T. Winograd. Understanding natural language. Cognitive psychology, 3(1):1–191, 1972. | 2309.02427#117 | Cognitive Architectures for Language Agents |
2309.02033 | 118 | [96]
[97]
[98]
Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent Abilities of Large Language Models. CoRR abs/2206.07682 (2022). Jerry W. Wei, Le Hou, Andrew K. Lampinen, Xiangning Chen, Da Huang, Yi Tay, Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, and Quoc V. Le. 2023. Symbol tuning improves in-context learning in language models. CoRR abs/2305.08298 (2023).
[99] Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, Yong Jiang, and Wenjuan Han. 2023. Zero-Shot Information Extraction via Chatting with ChatGPT. CoRR abs/2302.10205 (2023). | 2309.02033#118 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 118 | T. Winograd. Understanding natural language. Cognitive psychology, 3(1):1–191, 1972.
L. Wong, G. Grand, A. K. Lew, N. D. Goodman, V. K. Mansinghka, J. Andreas, and J. B. Tenenbaum. From word models to world models: Translating from natural language to the probabilistic language of thought. arXiv preprint arXiv:2306.12672, 2023.
R. E. Wray, J. R. Kirk, J. E. Laird, et al. Language models as a knowledge source for cognitive agents. arXiv preprint arXiv:2109.08270, 2021.
Q. Wu, G. Bansal, J. Zhang, Y. Wu, S. Zhang, E. Zhu, B. Li, L. Jiang, X. Zhang, and C. Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023. | 2309.02427#118 | Cognitive Architectures for Language Agents |
2309.02033 | 119 | [100] Wikipedia. 2023. https://en.wikipedia.org/wiki/Main_Page [101] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In EMNLP (Demos). 38–45.
[102] Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David S. Rosenberg, and Gideon Mann. 2023. BloombergGPT: A Large Language Model for Finance. CoRR abs/2303.17564 (2023). | 2309.02033#119 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 119 | T. Wu, E. Jiang, A. Donsbach, J. Gray, A. Molina, M. Terry, and C. J. Cai. Promptchainer: Chaining large language model prompts through visual programming. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, pages 1–10, 2022a.
T. Wu, M. Terry, and C. J. Cai. AI chains: Transparent and controllable human-AI interaction by chaining large language model prompts. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1–22, 2022b.
Z. Xi, W. Chen, X. Guo, W. He, Y. Ding, B. Hong, M. Zhang, J. Wang, S. Jin, E. Zhou, et al. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864, 2023.
Y. Xie, T. Xie, M. Lin, W. Wei, C. Li, B. Kong, L. Chen, C. Zhuo, B. Hu, and Z. Li. Olagpt: Empowering llms with human-like problem-solving abilities. arXiv preprint arXiv:2305.16334, 2023. | 2309.02427#119 | Cognitive Architectures for Language Agents |
2309.02033 | 120 | [103] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, and Adams Wei Yu. 2023. DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining. CoRR abs/2305.10429 (2023).
[104] Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. 2023. FinGPT: Open-Source Financial Large Language Models. CoRR abs/2306.06031 (2023). [105] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, and Jie Tang. 2022. GLM-130B: An Open Bilingual Pre-trained Model. abs/2210.02414 (2022).
[106] Biao Zhang and Rico Sennrich. 2019. Root Mean Square Layer Normalization. In NeurIPS. 12360–12371. | 2309.02033#120 | Data-Juicer: A One-Stop Data Processing System for Large Language Models |
2309.02427 | 120 | B. Xu, X. Liu, H. Shen, Z. Han, Y. Li, M. Yue, Z. Peng, Y. Liu, Z. Yao, and D. Xu. Gentopia: A collaborative platform for tool-augmented llms. arXiv preprint arXiv:2308.04030, 2023a.
B. Xu, Z. Peng, B. Lei, S. Mukherjee, Y. Liu, and D. Xu. Rewoo: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323, 2023b.
B. Xu, A. Yang, J. Lin, Q. Wang, C. Zhou, Y. Zhang, and Z. Mao. ExpertPrompting: Instructing Large Language Models to be Distinguished Experts. arXiv preprint arXiv:2305.14688, 2023c.
J. Yang, A. Prabhakar, K. Narasimhan, and S. Yao. Intercode: Standardizing and benchmarking interactive coding with execution feedback. arXiv preprint arXiv:2306.14898, 2023. | 2309.02427#120 | Cognitive Architectures for Language Agents |
2309.02033 | 121 | [106] Biao Zhang and Rico Sennrich. 2019. Root Mean Square Layer Normalization. In NeurIPS. 12360–12371.
[107] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona T. Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open Pre-trained Transformer Language Models. CoRR abs/2205.01068 (2022). [108] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A Survey of Large Language Models. CoRR abs/2303.18223 (2023). | 2309.02033#121 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 121 | S. Yao and K. Narasimhan. Language agents in the digital world: Opportunities and risks. princeton-nlp.github.io, Jul 2023. URL https://princeton-nlp.github.io/language-agent-impact/.
S. Yao, R. Rao, M. Hausknecht, and K. Narasimhan. Keep CALM and explore: Language models for action generation in text-based games. arXiv preprint arXiv:2010.02903, 2020.
S. Yao, H. Chen, J. Yang, and K. Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744–20757, 2022a.
S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022b. | 2309.02427#121 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 122 | [109] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models. CoRR abs/2304.06364 (2023).
[110] Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. In ICCV. 19–27.
APPENDIX OF DATA-JUICER: A ONE-STOP DATA PROCESSING SYSTEM FOR LARGE LANGUAGE MODELS
A ADDITIONAL DETAILS OF DATA-JUICER
A.1 Base Classes of OPs in Data-Juicer
We illustrate the core base classes of operators (OPs) in Data-Juicer in Listing 1.
# A.2 Theoretical Analysis of Space Usage for Caches and Checkpoints | 2309.02033#122 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 122 | S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
E. Zelikman, Y. Wu, J. Mu, and N. Goodman. STaR: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
A. Zeng, M. Attarian, B. Ichter, K. Choromanski, A. Wong, S. Welker, F. Tombari, A. Purohit, M. Ryoo, V. Sindhwani, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022.
C. Zhang, L. Wong, G. Grand, and J. Tenenbaum. Grounded physical language understanding with probabilistic programs and simulated worlds. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 45, 2023a. | 2309.02427#122 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 123 | # A.2 Theoretical Analysis of Space Usage for Caches and Checkpoints
Caches are generated after some of the functions of Dataset, such as map and filter. Generally, caches can be categorized into cache data and indices. The total size of a set of indices is very small, so we can ignore these parts when conducting the space usage analysis. On the contrary, the size of the cache data is nearly the same as the input dataset. Here we assume that the sizes of cache data and checkpoints are all the same as the input dataset's size, and there must be one cache data file for the original dataset after it is loaded. Assume that there are M Mappers, F Filters, and D Deduplicators in the processing configuration, and that the size of the original dataset is S; the detailed analysis for cache mode and checkpoint mode is shown below.
Space Usage of Cache Mode. Caches are generated after each OP. Mappers, Filters, and Deduplicators only generate one set of cache data. Besides, the first Filter would generate an extra set of cache data because a new column for storing statistics will be added to the dataset. Therefore the total disk space usage of caches is: | 2309.02033#123 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 123 | T. Zhang, F. Liu, J. Wong, P. Abbeel, and J. E. Gonzalez. The wisdom of hindsight makes language models better instruction followers. arXiv preprint arXiv:2302.05206, 2023b.
Y. Zhang, S. Sun, M. Galley, Y.-C. Chen, C. Brockett, X. Gao, J. Gao, J. Liu, and W. B. Dolan. Dialogpt: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270–278, 2020.
W. J. Zhao, R. Richie, and S. Bhatia. Process and content in decisions from memory. Psychological Review, 129(1):73, 2022.
V. Zhong, A. W. Hanjie, S. Wang, K. Narasimhan, and L. Zettlemoyer. SILG: The Multi-domain Symbolic Interactive Language Grounding Benchmark. Advances in Neural Information Processing Systems, 34:21505–21519, 2021.
C. Y. Zhou, D. Talmi, N. Daw, and M. G. Mattar. Episodic retrieval for model-based evaluation in sequential decision tasks, 2023a. | 2309.02427#123 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 124 | Space(cache_mode) = (1 + M + F + I(F > 0) + D) × S, where I(·) is the indicator function, which returns 1 when · is true, otherwise returns 0.
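For concreteness, the short Python sketch below evaluates this cache-mode bound together with the checkpoint-mode peak derived in the next paragraph; the helper names and the example sizes are illustrative assumptions, not part of Data-Juicer itself.

def cache_mode_space(M: int, F: int, D: int, S: int) -> int:
    # Worst-case disk usage of cache mode: (1 + M + F + I(F > 0) + D) * S.
    indicator = 1 if F > 0 else 0
    return (1 + M + F + indicator + D) * S

def checkpoint_mode_peak(S: int) -> int:
    # Peak disk usage of checkpoint mode (derived in the next paragraph): 3 * S.
    return 3 * S

# Example: a 100 GB dataset processed by 5 Mappers, 3 Filters, and 1 Deduplicator.
S = 100 * 1024**3
print(cache_mode_space(5, 3, 1, S) // 1024**3)   # 1100 GB of caches in total
print(checkpoint_mode_peak(S) // 1024**3)        # 300 GB peak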
Space Usage of Checkpoint Mode. Checkpoints are only generated when any exception or error occurs. However, caches are still stored after disabling the cache mode due to the features of Dataset. We clean up older caches after each OP. The detailed cleanup pipeline is: 1) OP_i finishes; 2) caches for OP_i are generated; 3) caches for OP_{i-1} are cleaned up. Thus there exist at most two sets of caches at the same time theoretically in step 2. Considering the caches of the original dataset, the peak disk space usage of caches in checkpoint mode is:
Space(checkpoint_mode) = 3 × S. | 2309.02033#124 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02427 | 124 | H. Zhou, M. Huang, T. Zhang, X. Zhu, and B. Liu. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
S. Zhou, U. Alon, F. F. Xu, Z. Jiang, and G. Neubig. Docprompting: Generating code by retrieving the docs. In The Eleventh International Conference on Learning Representations, 2022a.
S. Zhou, F. F. Xu, H. Zhu, X. Zhou, R. Lo, A. Sridhar, X. Cheng, Y. Bisk, D. Fried, U. Alon, et al. WebArena: A Realistic Web Environment for Building Autonomous Agents. arXiv preprint arXiv:2307.13854, 2023b.
Y. Zhou, A. I. Muresanu, Z. Han, K. Paster, S. Pitis, H. Chan, and J. Ba. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910, 2022b. | 2309.02427#124 | Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | http://arxiv.org/pdf/2309.02427 | Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths | cs.AI, cs.CL, cs.LG, cs.SC | v2 enriched actionable insights and discussions, and polished
abstract and introduction. 18 pages of main content, 12 pages of references,
5 figures. The first two authors contributed equally, order decided by coin
flip. A CoALA-based repo of recent work on language agents:
https://github.com/ysymyth/awesome-language-agents | null | cs.AI | 20230905 | 20230927 | [
{
"id": "2305.14909"
},
{
"id": "2307.15810"
},
{
"id": "1704.00051"
},
{
"id": "2201.11903"
},
{
"id": "2305.19118"
},
{
"id": "1606.04460"
},
{
"id": "2305.11176"
},
{
"id": "2304.11477"
},
{
"id": "2209.02299"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2308.07201"
},
{
"id": "2306.12672"
},
{
"id": "2201.01251"
},
{
"id": "2307.12856"
},
{
"id": "2212.14024"
},
{
"id": "2010.02903"
},
{
"id": "2302.02801"
},
{
"id": "2308.03022"
},
{
"id": "2207.05608"
},
{
"id": "2206.10498"
},
{
"id": "2305.08283"
},
{
"id": "2302.04761"
},
{
"id": "2308.12503"
},
{
"id": "2305.10601"
},
{
"id": "2212.06817"
},
{
"id": "2306.06070"
},
{
"id": "2305.14688"
},
{
"id": "2306.05301"
},
{
"id": "2307.07924"
},
{
"id": "2305.14325"
},
{
"id": "2306.14898"
},
{
"id": "2308.09830"
},
{
"id": "1901.10995"
},
{
"id": "2305.16960"
},
{
"id": "2305.16334"
},
{
"id": "2302.05206"
},
{
"id": "2203.07540"
},
{
"id": "2112.09332"
},
{
"id": "1912.05877"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1805.00899"
},
{
"id": "2204.00598"
},
{
"id": "2307.14984"
},
{
"id": "2309.07864"
},
{
"id": "2101.06804"
},
{
"id": "2205.03854"
},
{
"id": "2305.16291"
},
{
"id": "2305.11014"
},
{
"id": "2305.18323"
},
{
"id": "2109.08270"
},
{
"id": "2210.03629"
},
{
"id": "2206.05802"
},
{
"id": "2302.07459"
},
{
"id": "2307.15818"
},
{
"id": "2306.06770"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2304.05128"
},
{
"id": "2308.06391"
},
{
"id": "2302.07842"
},
{
"id": "2304.09853"
},
{
"id": "2204.02311"
},
{
"id": "2307.13854"
},
{
"id": "2302.02676"
},
{
"id": "2305.14992"
},
{
"id": "2010.03768"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.00151"
},
{
"id": "2203.11171"
},
{
"id": "2303.03378"
},
{
"id": "2202.01110"
},
{
"id": "2112.08633"
},
{
"id": "2112.09118"
},
{
"id": "2212.08073"
},
{
"id": "2308.04030"
},
{
"id": "2207.10342"
},
{
"id": "2012.15723"
},
{
"id": "1909.01871"
},
{
"id": "2210.11610"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2303.07678"
},
{
"id": "2205.12255"
}
] |
2309.02033 | 125 | # B ADDITIONAL NUMERICAL RESULTS
# Table 5: Evaluation results of three types of quality classifiers.
Quality Classifier | Precision | Recall | F1
GPT-3 | 96.82% | 98.14% | 97.47%
Chinese | 98.00% | 99.30% | 98.64%
Code | 71.23% | 54.21% | 61.56%
class Formatter:
    ...
    def load_dataset(self, *args) -> Dataset:
        ...

class Mapper:
    ...
    def process(self, sample: Dict) -> Dict:
        ...

class Filter:
    ...
    def compute_stats(self, sample: Dict) -> Dict:
        ...
    def process(self, sample: Dict) -> bool:
        ...

class Deduplicator:
    ...
    def compute_hash(self, sample: Dict) -> Dict:
        ...
    def process(self, dataset: Dataset) -> Dataset:
        ...

# Listing 1: The illustration of OP base classes in Data-Juicer.
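To make the interface in Listing 1 concrete, the sketch below shows what a user-defined Filter OP could look like when written against these base classes; the class name, stats key, and length thresholds are illustrative assumptions rather than one of Data-Juicer's built-in operators.

from typing import Dict

# Assumes the Filter base class from Listing 1 is importable in scope.
class TextLengthFilter(Filter):
    """Keep samples whose text length lies inside a configurable range."""

    def __init__(self, min_len: int = 10, max_len: int = 10000):
        self.min_len = min_len
        self.max_len = max_len

    def compute_stats(self, sample: Dict) -> Dict:
        # Record the statistic on the sample so later analysis and visualization can reuse it.
        sample.setdefault("stats", {})["text_len"] = len(sample.get("text", ""))
        return sample

    def process(self, sample: Dict) -> bool:
        # Returning True keeps the sample; False filters it out.
        text_len = sample["stats"]["text_len"]
        return self.min_len <= text_len <= self.max_len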
B.1 Quality Classifier Firstly, we will show how we can reproduce the GPT-3 and achieve comparable performance. | 2309.02033#125 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 126 | B.1 Quality Classifier Firstly, we will show how we can reproduce the GPT-3 and achieve comparable performance.
We follow the training procedure of the quality classifier in GPT-3 [9], which used a logistic regression classifier with features from a standard tokenizer and HashingTF of PySpark. Based on this, we expand this training pipeline to Chinese text and various code types. The training details are listed in Table 6, where the keeping method includes: | 2309.02033#126 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 127 | label: doc_score > 0.5; pareto [9]: doc_score > 1 - np.random.pareto(α), α = 9. We split these datasets into training and evaluation splits with a split ratio of 4:1. Then these classifiers trained on the training split are evaluated on the evaluation split. Experimental results are shown in Table 5. As we can see, the reproduced GPT-3 classifier and its Chinese version perform well except for the Code version. We speculate that the positive and negative splitting method for the Code quality classifier might not be a good choice, and we leave this issue to future research.
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 128 | Besides, we compare keeping ratios when using these classifiers to re-sample CommonCrawl between the original GPT-3 quality classifier and our reproduced classifiers, which is shown in Table 4. The keeping ratio of the original GPT-3 quality classifier is estimated by the data size before and after filtering described in GPT-3 paper [9]. We can see that the keeping ratios of our reproduced GPT-3 quality classifiers are basically aligned with the original one.
B.2 Data Recipes
For pre-training data, we acquired a vast amount of raw textual corpora primarily following the procedural guidelines of RedPajama [24] and the Pile [31]. The common subsets were merged and
# Table 6: Training configuration of 3 types of quality classifiers.
Quality Classifier | Tokenizer | Keep Method | Positive Datasets | Negative Datasets
GPT-3 | Standard Tokenizer | pareto | Wikipedia-en & books1 & OpenWebText2 | CommonCrawl
Chinese | Sentencepiece | label | Wikipedia-zh & Wudao | Samples in Chinese from CommonCrawl
Code | Sentencepiece | label | Samples with max_stars_count>=1372 from TheStack | Random Samples from the rest of TheStack | 2309.02033#128 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 129 | subjected to Data-Juicer refinements. The resultant data recipe is presented in Table 7, which covers 15 prominent components. We use the SentencePiece [50] tokenizer as implemented in GPT-NeoX-20B [7] to prepare text and report the counted number of tokens. The sampling proportion is the normalization of token numbers, except for Books and Wikipedia, which undergo 2 and 2.5 epochs respectively, to enhance the weighting of high-quality corpora.
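As a concrete check of this weighting scheme, the short sketch below recomputes a component's sampling proportion from the token counts in Table 7 and the per-component epoch multipliers; the helper function is illustrative and not part of Data-Juicer.

def sampling_proportion(tokens: dict, epochs: dict, component: str) -> float:
    # proportion_i = epochs_i * tokens_i / sum_j (epochs_j * tokens_j); epochs default to 1.
    total = sum(epochs.get(name, 1.0) * count for name, count in tokens.items())
    return epochs.get(component, 1.0) * tokens[component] / total

tokens = {
    "CommonCrawl": 360_925_581_674, "C4": 181_951_688_729, "GitHub": 65_076_921_292,
    "Books": 26_389_944_579, "Wikipedia": 17_615_935_449, "arXiv": 29_093_082_586,
    "PubMed Central": 25_589_708_647, "StackExchange": 19_793_629_900, "FreeLaw": 13_057_506_102,
    "PubMed Abstracts": 5_208_343_613, "USPTO": 4_021_281_155, "EuroParl": 780_962_770,
    "HackerNews": 485_584_871, "PhilPapers": 478_040_431, "NIH ExPorter": 436_414_852,
}
epochs = {"Books": 2.0, "Wikipedia": 2.5}
print(f"{sampling_proportion(tokens, epochs, 'Books'):.2%}")      # ~6.57%
print(f"{sampling_proportion(tokens, epochs, 'Wikipedia'):.2%}")  # ~5.48%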
Table 7: Statistics of Data-Juicer's pre-training data. | 2309.02033#129 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 130 | Table 7: Statistics of Data-Juicer's pre-training data.
Component | #Tokens | Sampling prop.
CommonCrawl | 360,925,581,674 | 44.91%
C4 | 181,951,688,729 | 22.64%
GitHub | 65,076,921,292 | 8.10%
Books | 26,389,944,579 | 6.57%
Wikipedia | 17,615,935,449 | 5.48%
arXiv | 29,093,082,586 | 3.62%
PubMed Central | 25,589,708,647 | 3.18%
StackExchange | 19,793,629,900 | 2.46%
FreeLaw | 13,057,506,102 | 1.62%
PubMed Abstracts | 5,208,343,613 | 0.65%
USPTO | 4,021,281,155 | 0.50%
EuroParl | 780,962,770 | 0.10%
HackerNews | 485,584,871 | 0.06%
PhilPapers | 478,040,431 | 0.06%
NIH ExPorter | 436,414,852 | 0.05% | 2309.02033#130 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 131 | Sampling prop.
For fine-tuning data, we merge and refine tens of Alpaca-CoT datasets. Each dataset can be categorized into English, Chinese and Multilingual by language; into instruct fine-tuning, and chat fine-tuning including single-round dialog, multi-round dialog and preference by usage; multi-task and task-specific by task type; and human-generated, self-instruct, and mixed collection of datasets by the generation method. The detailed numbers of datasets for each category are presented in Table 8.
Table 8: Statistics of Data-Juicer fine-tuning data used in our experiments. † These tags are newly added by Data-Juicer compared to the original tag sets of Alpaca-CoT [74]. "CFT" indicates Chat Fine-Tuning.
Category | Sub-Category | #Datasets
Language | English | 28
Language | Chinese | 14
Language | Multilingual | 3
Usage† | Instruct Fine-Tuning (IFT) | 17
Usage† | CFT: Single-Round Dialog | 23
Usage† | CFT: Multi-Round Dialog | 2
Usage† | CFT: Preference | 5
Task Type | Multi-Task | 27
Task Type | Task-Specific | 13
Generation Method | Human-Generated | 3
Generation Method | Self-Instruct | 12
Generation Method | Mixed | 5
Generation Method | Collection of Datasets | 19 | 2309.02033#131 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 132 | B.3 Experiments Details
B.3.1 Models and Training For Pre-training Data. We adhere to the official paper [93] and leverage an open-source implementation [34] to build standard LLaMA models. Basically, it applies RMSNorm [106], the SwiGLU activation [83], and rotary positional embedding [88] on the decoder-only transformer architecture. The LLaMA-1.3B model is composed of 24 transformer layers, each with 16 self-attention heads and 2048 bottleneck units.
LLMs are pre-trained using the AdamW optimizer [63] with hyper-parameters β1 = 0.9 and β2 = 0.95. For LLaMA-1.3B, the initial learning rate gradually increases to 2e-5 using 1% warm-up steps and finally decays to 10% through a cosine schedule. The weight decay is set to 0.1 and the gradient ℓ2-norm is clipped to 1.0.
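To make these hyper-parameters concrete, here is a minimal PyTorch sketch of an equivalent optimizer and schedule; the stand-in model, total step count, and dummy loss are assumptions for illustration and are not the authors' training code.

import math
import torch

model = torch.nn.Linear(2048, 2048)   # stand-in for the LLaMA-1.3B model
total_steps = 100_000                 # assumed; the step count is not stated here
warmup_steps = int(0.01 * total_steps)
peak_lr = 2e-5

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                              betas=(0.9, 0.95), weight_decay=0.1)

def lr_lambda(step: int) -> float:
    # Linear warm-up to the peak LR, then cosine decay down to 10% of the peak.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.1 + 0.9 * 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# One illustrative optimization step (real training loops over batches):
loss = model(torch.randn(4, 2048)).pow(2).mean()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clip gradient L2-norm to 1.0
optimizer.step()
scheduler.step()
optimizer.zero_grad()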
More information about these datasets can be found on the Data-Juicer recipes page of our repository: https://github.com/alibaba/data-juicer/blob/main/configs/data_juicer_recipes | 2309.02033#132 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 133 | B.3.2 Models and Training for Fine-Tuning Data. For fine-tuning, we choose LLaMA-7B as our base model and fine-tune it for 3 epochs. We follow the hyper-parameter settings in Alpaca [92]. Specifically, the optimizer is AdamW with a learning rate of 2e-5, a global batch size of 256, and a weight decay of 0. The learning rate follows a cosine schedule with 3% initial warm-up steps.
Regarding the data recipes in Table 3, for the (CFT, EN) case we consider 5 competitive subsets (Alpaca, GPTeacher, FastChat, Guanaco, and CodeAlpaca) from Alpaca-CoT as candidate datasets; for the (CFT, ZH) case we use (AlpacaGPT4, Belle, Instinwild) as candidate datasets. Generally speaking, we bucket these candidate datasets according to more than a dozen built-in analytical dimensions and sample a fixed amount of data from each bucket, so as to increase the diversity of the processed data as much as possible (a minimal sketch of this bucketed sampling is given below). More detailed hyper-parameters of data processing can be found in our released data recipes.
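A minimal sketch of such diversity-oriented bucketed sampling is shown below; it assumes each sample already carries per-dimension stats, and the field names, bin counts, and per-bucket quota are illustrative rather than the exact Data-Juicer settings.

```python
# Bucket samples along each analytical dimension and draw a fixed amount per bucket.
import random
from collections import defaultdict

def bucketed_sample(samples, dimension_keys, per_bucket=1000, num_bins=10, seed=42):
    """Bin samples along each stats dimension and sample a fixed number per bin."""
    rng = random.Random(seed)
    selected = []
    for key in dimension_keys:
        values = [s["stats"][key] for s in samples]
        lo, hi = min(values), max(values)
        width = (hi - lo) / num_bins or 1.0
        buckets = defaultdict(list)
        for s in samples:
            idx = min(int((s["stats"][key] - lo) / width), num_bins - 1)
            buckets[idx].append(s)
        for bucket in buckets.values():
            selected.extend(rng.sample(bucket, min(per_bucket, len(bucket))))
    return selected  # duplicates across dimensions can be deduplicated afterwards

# Hypothetical usage with illustrative dimension names:
# mixture = bucketed_sample(candidate_samples, ["text_len", "perplexity", "alnum_ratio"])
```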
Both the pre-trained and fine-tuned reference models are released in our homepage. | 2309.02033#133 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 134 | Both the pre-trained and fine-tuned reference models are released on our homepage.
B.3.3 System Performance Experiments. The end-to-end processing experiments mentioned in Section 7.2.1 are all conducted on the same machine with 128 Intel(R) Xeon(R) Platinum 8369B cores and about 990 GB of memory. Before starting these experiments, the original datasets, third-party models, and other assets are prepared in advance for both the baselines and Data-Juicer, and the intermediate cache files are cleaned after every complete run of Data-Juicer. After processing, we use the same number of processes to export the result dataset to the local SSD.
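The per-second memory sampling described in the next paragraph can be sketched with psutil as follows; the monitored process and the sampling duration are placeholders, and this is an illustrative reconstruction rather than the exact monitoring code.

```python
# Sample resident memory of a process tree every second and average it
# over processes and over time, mirroring the described aggregation.
import time
import psutil

def average_memory(root_pid, interval_s=1.0, duration_s=60):
    root = psutil.Process(root_pid)
    per_second_means = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        rss = []
        for proc in [root] + root.children(recursive=True):
            try:
                rss.append(proc.memory_info().rss)
            except psutil.NoSuchProcess:
                continue  # a worker may exit between listing and sampling
        if rss:
            per_second_means.append(sum(rss) / len(rss))  # average over processes
        time.sleep(interval_s)
    return sum(per_second_means) / max(1, len(per_second_means))  # average over time
```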
As for the resource monitoring tool, it is implemented based on the psutil³ library. It samples the memory of all related processes every second during the processing pipeline. We then compute the average memory usage by summing the memory usage over all processes and dividing by the number of processes used in each experiment. Finally, we aggregate all data and compute the average memory usage over time. | 2309.02033#134 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 135 | B.3.4 End-to-end System Baselines. In the above system-performance experiments, we mainly compared the end-to-end performance of Data-Juicer against two state-of-the-art baselines: RedPajama [24] and Dolma [86]. Beyond the empirical comparison in Section 7.2.1, we introduce and compare them in more detail here.
RedPajama.⁴ The RedPajama project, developed by Together AI, initially aims to reproduce the LLaMA training dataset [93] and open-source the entire code for data collection and processing, making it a significant and popular contribution to the LLM community. This is the primary reason for selecting it as our baseline. RedPajama provides a reproduced version of all seven subsets of the LLaMA training dataset, including arXiv, Books, C4, CommonCrawl, GitHub Code, Stack Exchange, and Wikipedia. | 2309.02033#135 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 136 | While RedPajama has made valuable contributions, our work explores different aspects and offers complementary features. For instance: (1) RedPajama's design is closely tied to specific datasets, which makes it challenging to adapt its data processing pipelines to other datasets. (2) Its focus on reproducing the LLaMA datasets leads to trade-offs in efficiency, which is not the primary concern of the RedPajama project. (3) The current data processing component in RedPajama lacks systematization and customization. Adding new
³https://github.com/giampaolo/psutil
⁴We compared RedPajama in our experiments with its GitHub commit ID: 45b37c2a1d1e495b0f48549ef3ce03ff029f7881.
data processing methods to the existing pipelines would require understanding and modifying a significant portion of the code. As a result, most users typically opt to utilize the RedPajama Dataset directly rather than attempting to customize or improve its data processing pipelines. | 2309.02033#136 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 137 | Dolma. 5 The Dolma project, originating from Allen AI, com- prises two components: the Dolma Dataset and the Dolma Toolkit. It is also a newly established data processing initiative. We selected the Dolma Toolkit as a baseline because its objective of generating pre-training data for language modeling aligns with one of our target data types (we focus on both pre-training and fine-tuning data). The toolkit offers numerous âTaggersâ that enable attribute tagging (analogous to âstatsâ in Data-Juicer) for each document sample. These tags are then used to filter out samples with undesir- able attributes. Users have the flexibility to create custom taggers tailored to their specific needs. | 2309.02033#137 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 138 | Dolma.⁵ The Dolma project, originating from Allen AI, comprises two components: the Dolma Dataset and the Dolma Toolkit. It is also a newly established data processing initiative. We selected the Dolma Toolkit as a baseline because its objective of generating pre-training data for language modeling aligns with one of our target data types (we focus on both pre-training and fine-tuning data). The toolkit offers numerous "Taggers" that enable attribute tagging (analogous to "stats" in Data-Juicer) for each document sample. These tags are then used to filter out samples with undesirable attributes. Users have the flexibility to create custom taggers tailored to their specific needs.
B.3.5 Scalability. Our experiments are performed on a platform comprising 16 servers, each equipped with a 64-core Intel(R) Xeon(R) Platinum CPU (mix of 8269CY and 8163 models) and 512 GB of memory. The network bandwidth shared among these servers is 20 Gbps. We utilize NAS storage to house both the raw data and the processed results. For the scalability experiments, we consider the two baselines as follows: | 2309.02033#138 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 139 | ⢠Data-Juicer on Ray: We implement a Ray [66] executor for Data-Juicer, which only adaptes the underlying interfaces of the HuggingFace-datasets with Ray-datasets, while all OPs of Data-Juicer remain unchanged. This implies that usersâ code based on our native Python version can be seamlessly migrated from a single-machine version to distributed computing environ- ments.
⢠Data-Juicer on Beam: This method is based on Apache Beam with the Apache Flink Runner. When compared to the Ray ver- sion, the Beam version requires additional code development to meet the demands of the Beam data processing pipeline. This in- cludes the adaptations of several OPs and the replacement of the Formatter/Exporter with counterparts in Beam.
B.4 Per-Task Evaluation. For a thorough and consolidated assessment, we summarize the individual scores of the evaluated LLMs on the 16 core HELM tasks in Table 9.
⁵We compared Dolma in our experiments with its GitHub commit ID: 5a010a2685914b1db7744426abfb4b9ece52da95.
Table 9: Evaluation results on 16 core tasks of HELM benchmark. | 2309.02033#139 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.02033 | 140 | Table 9: Evaluation results on 16 core tasks of HELM benchmark.
| Task | Falcon-1.3B | Pythia-1.4B | LLaMA-1.3B | LLaMA-1.3B (Data-Juicer) |
|---|---|---|---|---|
| MMLU | 24.7 | 26.0 | 25.9 | 27.0 |
| BoolQ | 63.0 | 56.0 | 49.0 | 56.0 |
| NarrativeQA | 32.1 | 31.5 | 38.2 | 49.9 |
| NaturalQuestions (closed-book) | 10.7 | 10.5 | 10.1 | 11.2 |
| NaturalQuestions (open-book) | 50.0 | 49.8 | 45.9 | 54.3 |
| QuAC | 24.3 | 26.5 | 26.0 | 21.7 |
| HellaSwag | 67.0 | 57.0 | 56.0 | 52.0 |
| OpenbookQA | 44.0 | 34.0 | 40.0 | 43.0 |
| TruthfulQA | 19.0 | 21.0 | 33.0 | 33.0 |
| MS MARCO (regular) | 16.8 | 12.9 | 11.2 | 12.1 |
| MS MARCO (TREC) | 33.5 | 27.4 | 26.9 | 28.1 |
| IMDB | 55.0 | 84.0 | 80.0 | 84.0 |
| XSUM | 5.7 | 6.5 | 5.2 | 5.3 |
| CNN/DailyMail | 4.0 | 8.4 | 7.8 | 11.1 |
| CivilComments | 49.4 | 49.7 | 50.1 | 50.0 |
| RAFT | 44.3 | 42.3 | 42.1 | 49.3 |
| 2309.02033#140 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | The immense evolution in Large Language Models (LLMs) has underscored the
importance of massive, heterogeneous, and high-quality data. A data recipe is a
mixture of data from different sources for training LLMs, which plays a vital
role in LLMs' performance. Existing open-source tools for LLM data processing
are mostly tailored for specific data recipes. To continuously uncover the
potential of LLMs, incorporate data from new sources, and improve LLMs'
performance, we build a new system named Data-Juicer, with which we can
efficiently generate diverse data recipes, explore different possibilities in
forming data mixtures, and evaluate their effects on model performance.
Different from traditional data-analytics pipelines, Data-Juicer faces some
unique challenges. Firstly, the possible data sources for forming data recipes
are truly heterogeneous and massive with various qualities. Secondly, it is
extremely expensive to precisely evaluate data recipes' impact on LLMs'
performance. Thirdly, the end users of Data-Juicer, model developers, need
sufficient flexibility to configure and evaluate different data recipes.
Data-Juicer features a fine-grained abstraction of pipelines for constructing
data recipes, with over 50 built-in operators for easy composition and
extension. By incorporating visualization and auto-evaluation capabilities,
Data-Juicer enables a timely feedback loop for both LLM pre-training and
fine-tuning. Further, Data-Juicer is optimized and integrated with ecosystems
for LLM training, evaluation, and distributed computing. The data recipes
derived with Data-Juicer gain notable improvements on state-of-the-art LLMs, by
up to 7.45% increase in averaged score across 16 LLM benchmarks and 17.5%
higher win rate in pair-wise GPT-4 evaluations. Our system, data recipes, and
tutorials are released, calling for broader data-centric research on training
and understanding LLMs. | http://arxiv.org/pdf/2309.02033 | Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, Jingren Zhou | cs.LG, cs.DB, cs.DC | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos
are continuously maintained at https://github.com/alibaba/data-juicer | null | cs.LG | 20230905 | 20231220 | [
{
"id": "2306.11644"
},
{
"id": "2212.09597"
},
{
"id": "2303.17580"
}
] |
2309.01660 | 0 | # Unveiling theory of mind in large language models: A parallel to single neurons in the human brain
Mohsen Jamali1, Ziv M. Williams1,2,3*, Jing Cai1*†
1 Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA. 2 Harvard-MIT Division of Health Sciences and Technology, Boston, MA. 3 Harvard Medical School, Program in Neuroscience, Boston, MA.
*Senior co-authors; †Correspondence should be sent to [email protected]
# Abstract | 2309.01660#0 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 1 | With their recent development, large language models (LLMs) have been found to exhibit a certain level of Theory of Mind (ToM), a complex cognitive capacity that is related to our conscious mind and that allows us to infer anotherâs beliefs and perspective. While human ToM capabilities are believed to derive from the neural activity of a broadly interconnected brain network, including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise processes underlying LLMâs capacity for ToM or their similarities with that of humans remains largely unknown. In this study, we drew inspiration from the dmPFC neurons subserving human ToM and employed a similar methodology to examine whether LLMs exhibit comparable characteristics. Surprisingly, our analysis revealed a striking resemblance between the two, as hidden embeddings (artificial neurons) within LLMs started to exhibit significant responsiveness to either true- or false-belief trials, suggesting their ability to represent anotherâs perspective. These artificial embedding responses were closely correlated with the LLMsâ performance during the ToM tasks, a property that was dependent on the size of the models. Further, the otherâs beliefs could be accurately | 2309.01660#1 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 2 | the LLMsâ performance during the ToM tasks, a property that was dependent on the size of the models. Further, the otherâs beliefs could be accurately decoded using the entire embeddings, indicating the presence of the embeddingsâ ToM capability at the population level. Together, our findings revealed an emergent property of LLMsâ embeddings that modified their activities in response to ToM features, offering initial evidence of a parallel between the artificial model and neurons in the human brain. | 2309.01660#2 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 3 | # Introduction
In recent years, the rapid evolution of Large Language Models (LLMs) has opened a new era of machine intelligence (1, 2). Beyond their remarkable power in language generation, these LLMs have exhibited a certain level of competence across diverse domains, including conversation, code generation, basic mathematical calculation, logical reasoning, and problem-solving tasks (3-7). Particularly intriguing is their emergent capacity to engage in Theory of Mind (ToM), a cognitive ability essential for attributing mental states and understanding the perspectives of others (8, 9). Notably, recent research has shown that LLMs are capable of achieving ToM skills comparable to those of seven-year-olds (10). Although other researchers raise questions about the extent to which large language models can comprehend and simulate theory of mind (11-13),
it is evident that LLMs have achieved a level of ToM capability that far surpasses the capabilities of earlier, smaller-scale language models (10). | 2309.01660#3 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 4 | it is evident that LLMs have achieved a level of ToM capability that far surpasses the capabilities of earlier, smaller-scale language models (10).
Theory of mind is a critical cognitive ability through which humans create intricate mental representations of other agents and comprehend that these agents may possess intentions, beliefs or actions differently from oneâs own or the objective reality (8, 9). A critical test for ToM is the false belief task, which evaluates whether one can recognize that someone may hold an invalid belief that diverges from reality after a change to the environment that they did not witness (14- 16). For example, a person might believe an apple is still on the tree if that person did not witness the apple falling. Over the past few decades, human brain imaging studies have provided substantial evidence for the brain network that supports our ToM ability, including the temporal- parietal junction, superior temporal sulcus and the dorsal medial prefrontal cortex (dmPFC) (17- 20). Recently, our research has revealed a detailed single neuronal process in the human dmPFC for representing otherâs beliefs and identified candidate neurons that could support ToM (9). Nevertheless, it remains to be seen whether there exist any parallel for the neural activities associated with human theory of mind in large language models. | 2309.01660#4 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 5 | Here, we employed a methodology similar to that used in humans (9) to examine the relationship between single neurons in the human brain and the embeddings in the LLM substructures. We aim to begin studying whether and what processes may commonly subserve ToM ability, how they align with task performance, and how they precisely relate to network structure and size. Utilizing open-source LLMs, our initial approach involved a detailed evaluation across multiple ToM tasks, with task materials closely resembling those provided to human participants. Building on these comparisons, we then explored which specific aspects of the hidden embeddings correlated with task performance and with the ability of the LLMs to accurately discern false from true beliefs. These results were then compared to those previously obtained from single neurons within the human brain. Finally, we verified our findings by conducting a decoding analysis to directly predict the other's beliefs from the hidden embeddings (a minimal sketch of such a decoding analysis is given below). These analyses, in combination, provide insight into how LLMs achieve high-level ToM capabilities, which hidden network processes are involved, and how these compare to those of native biological neurons processing the same precise tasks.
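As a concrete illustration of the population-level decoding referred to above, the sketch below fits a cross-validated logistic-regression decoder on hidden embeddings; the arrays are randomly generated placeholders rather than real model activations or the authors' exact analysis code.

```python
# Decode true- vs false-belief trial labels from hidden embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(76, 2048))  # placeholder: (n_trials, hidden_dim) activations
labels = rng.integers(0, 2, size=76)      # placeholder: 0 = true belief, 1 = false belief

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, embeddings, labels, cv=5)  # cross-validated accuracy
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```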
# Results
# Large language modelsâ performances on theory of mind questions | 2309.01660#5 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 6 | # Results
# Large language models' performances on theory of mind questions
To first evaluate the capacity of LLMs for ToM, we used four independently trained, open- source LLMs: Falcon (21, 22), LLaMa (23), Pythia (24) and GPT-2 models (25). Among them, Falcon and LLaMa exhibited remarkable performance among the open-sourced models, as demonstrated by their rankings on the Huggingface leaderboard (26). Each tested LLM encompassed multiple versions with various numbers of hidden layers and parameters, and fine- tuned on multiple datasets, as summarized in Table 3. These variations of a model group spanned a broad range of the model performance on language tasks, forming a comprehensive collection of models exhibiting linguistic capabilities. | 2309.01660#6 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 7 | We initially assessed these modelsâ ability in performing theory of mind tasks using the same time-aligned materials obtained from neuronal recordings as well as how performance was precisely affected by LLM size (Table 1) (9). Each model underwent independent evaluation through a series of trials comprising a scenario statement followed by two corresponding questions. The statements were designed in pairs with a true belief trial and a false belief trial based on whether the agentâs belief matched the reality or not (Fig. 1A, Table 1). For example, the statement may provide the scenario âNed and you take a photo of an apple on a tree. While the photo develops, Ned leaves and is unaware that a wind blows the apple to ground.â Since Nedâs belief on the location of the apple is different from the reality, this is a false-belief trial. In comparison, a true-belief trial included a statement that Nedâs belief is the same as reality (Fig. 1A). The statements were followed by two questions, one relating to the belief of the agent in the scenario statement (i.e., âbeliefâ question) and the other concerning the physical state of reality (i.e., | 2309.01660#7 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 8 | to the belief of the agent in the scenario statement (i.e., âbeliefâ question) and the other concerning the physical state of reality (i.e., âfactâ question). To obtain plausible responses from models with different language capabilities, we formulated ToM questions by presenting partial sentences that would guide the predicted word towards being the answer (Fig. 1A), and compared the predicted probabilities of the possible words (âtreeâ or âgroundâ in this example) to assess whether the correct answer had higher probability than the other (details in Methods). Together, our task material is composed of 76 trials. The lengths of the statement varied between 81 words to 191 words, with an average of 125 words. | 2309.01660#8 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 9 | Overall, we find the tested LLMs had higher accuracies when asked about the facts and othersâ beliefs in true-belief trials compared to the false-belief trials (Fig. 1B, C). Specifically, the accuracies of the predicted answers for the belief questions from the true-belief trials by different LLMs reached an average of 68% (50% chance performance; ranged from 56% to 77%), which was similar to the prediction accuracies on the fact questions (ranged from 55% to 79% with an average of 70%). The false-belief accuracies were lower, by contrast, with an average of only 52% (ranged from 26% to 69%). For these trials particularly, larger models (model parameters 12b) performed significantly better than smaller models ( 7b, T-test, statistics = 2.88, p = 0.01), with LLaMa-33b model showing the highest accuracy at 69%. In comparison, smaller models showed accuracies lower or similar to chance level. Therefore, although most models exhibited high accuracies to questions about facts or in true-belief trials, only large models showed high accuracies in response to other-belief questions in false-belief trials. | 2309.01660#9 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
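A sketch of the group comparison mentioned above: an independent-samples t-test on false-belief accuracies of large (≥ 12b) versus small (≤ 7b) models. The accuracy values are illustrative placeholders, not the paper's per-model numbers.

```python
# Independent-samples t-test comparing false-belief accuracies of large vs. small models.
# The arrays below are hypothetical placeholder accuracies.
import numpy as np
from scipy.stats import ttest_ind

acc_large = np.array([0.69, 0.66, 0.62, 0.58])  # hypothetical large-model accuracies
acc_small = np.array([0.26, 0.41, 0.48, 0.52])  # hypothetical small-model accuracies

stat, p = ttest_ind(acc_large, acc_small)
print(f"t = {stat:.2f}, p = {p:.3f}")  # the text reports t = 2.88, p = 0.01 for the real models
```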
2309.01660 | 10 | To ensure that the observed accuracies did not originate from any cues outside of the scenarios in the statements, we performed the following controls. Firstly, we presented each model with the same questions as before, but excluded the preceding statements. This control condition therefore allowed us to assess whether factors such as imbalanced word frequencies or linguistic information within the questions alone could account for the high accuracies. The question-only tests, however, returned an average accuracy of 47% across all models (i.e., chance-level accuracy), with the larger models showing similar performance to the smaller models (T-test, statistic = -0.98, p = 0.34). Secondly, to examine whether the high accuracies could be accounted for by factors unrelated to the content of the statements, we randomly permuted the words of the statements in each true- and false-belief trial (Methods, Table 2; a permutation sketch follows this record). This resulted in an average accuracy of 55% across all models, and there was no difference between the large and small models on the false-belief questions (T-test, statistic = -1.94, p = 0.07). Therefore, these control conditions provided additional confirmation that the remarkable
performance of the large models depended on the content of the statements, ruling out explanations based on random factors or word frequency alone.
# Embeddings selectively tuned to true and false beliefs | 2309.01660#10 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
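A sketch of the scrambled-statement control described above: the words of a scenario statement are randomly permuted while the question is left intact, so any residual accuracy cannot come from the narrative content. Whitespace splitting is a simplifying assumption.

```python
# Build a word-permuted control prompt from an intact statement plus the unchanged question.
import random

def permute_statement(statement: str, seed: int = 0) -> str:
    words = statement.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

statement = ("Charles left his wallet on the counter as he was leaving the store. "
             "The wallet fell on the floor. Charles returns.")
question = "Charles will look for the wallet on the"

control_prompt = permute_statement(statement) + " " + question
print(control_prompt)
```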
2309.01660 | 11 | performance of the large models depended on the content of the statements, ruling out explanations based on random factors or word frequency alone.
# Embeddings selectively tuned to true and false beliefs
Within human cognition, ToM performance is thought to be supported by a vast network of interconnected neurons that presumably function together to form representations of another's beliefs. Our recent study identified single neurons in the dorsal medial prefrontal cortex that exhibit selective modulation for true- versus false-belief trials during the question period, suggesting a particular role in processing others' beliefs and potentially subserving ToM ability (9). Here, we obtained data from single-neuronal recordings in human subjects as they performed a structured false-belief task. Out of 212 recorded human neurons, 49 (23%) displayed significant changes in activity for true- or false-belief trials while the participants performed ToM tasks (Fig. 2A). That is, these neurons displayed a consistent difference in their firing rates when the other's beliefs were true compared to when the other's beliefs were false. These neurons therefore reliably changed their activity in relation to the other's beliefs despite variations in the specific statements and scenarios within each trial type, providing evidence for the specific tuning of human neurons to ToM computations. | 2309.01660#11 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 12 | To investigate whether the artificial models' theory of mind capability shares mechanisms similar to those in the human brain, we performed an element-wise analysis using the following procedures. Firstly, to obtain the activities of "artificial neurons" in the LLMs, we used the hidden embeddings (outputs of the transformer modules) from all layers, as well as the input to the first transformer module (an extraction sketch follows this record). Thus, for example, instead of using the firing-rate values of each neuron to determine its response selectivity to false versus true beliefs, we used the embedding values of each node in the network (Methods). Secondly, to establish a meaningful comparison with human neurons, we employed ToM task materials for the LLMs closely aligned with those we tested on humans. Here, we used the same statements as in the model evaluation, with trials grouped into pairs of true- and false-belief trials, and asked a belief question following the statement (Fig. 2A, Table 1, Methods). These questions were exactly the same within each pair, but the answer depended on the information in the statements, which defined the trial types. We modified the statements so that each true-false-belief pair contained a similar number of words to minimize any effect caused by variations in word count. Finally, | 2309.01660#12 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
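A sketch of how the layer-wise "artificial neurons" can be read out, assuming the Hugging Face transformers library: with output_hidden_states=True a causal LM returns one hidden-state tensor per layer, including the input to the first transformer block. "gpt2" is again an illustrative stand-in for the larger models studied here.

```python
# Extract per-layer hidden states (the "artificial neurons") for a statement + question prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

prompt = ("Mary put fish inside a jewelry box while her son wasn't looking. "
          "Her son opens the box. Inside the box, he expects to find")
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# hidden_states is a tuple of (num_layers + 1) tensors, each shaped (1, seq_len, hidden_dim).
for layer_idx, h in enumerate(out.hidden_states):
    print(layer_idx, tuple(h.shape))
```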
2309.01660 | 13 | trial types. We modified the statements so that each true-false-belief pair contained a similar number of words to minimize any effect caused by variations in word count. Finally, we fed each model the concatenation of the statement and the question as a single input and examined only the embeddings from the tokens within the questions (detailed explanation in Methods). We then tested whether embeddings showed significant differences in value between true- and false-belief trials using a Mann-Whitney U test (a per-dimension sketch follows this record). Thus, if an embedding encoded no ToM attributes and solely reflected the literal wording (which was very similar within each pair) or had no memory of the statements, it would take similar values across the two trials of a pair. Together, the LLMs' hidden embeddings can be thought of, in turn, as the activities of artificial neurons across all network layers that vary in relation to the task and trial-aligned input. | 2309.01660#13 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
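A sketch of the element-wise selectivity test described above: within one layer, every embedding dimension is compared between true- and false-belief trials with a Mann-Whitney U test. The arrays are random placeholders standing in for question-token embeddings averaged per trial; with random data roughly the alpha fraction of dimensions passes by chance, whereas the paper applies the test to real embeddings.

```python
# Per-dimension Mann-Whitney U test between true- and false-belief trials for one layer.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_trials, hidden_dim = 38, 1024            # hypothetical trial count and layer width
emb_true = rng.normal(size=(n_trials, hidden_dim))
emb_false = rng.normal(size=(n_trials, hidden_dim))

alpha = 0.05
significant = [
    d for d in range(hidden_dim)
    if mannwhitneyu(emb_true[:, d], emb_false[:, d], alternative="two-sided").pvalue < alpha
]
print(f"{100 * len(significant) / hidden_dim:.1f}% of dimensions respond to trial type")
```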
2309.01660 | 14 | Using this approach, we indeed observed embeddings with significant responses corresponding to the different trial types. The percentage of modulated embeddings varied across models and layers (Fig. 2B-D). For example, in the Falcon-40b model, we found 6.3% significant embeddings in layer 25, which represented the highest percentage among the
layers. These embeddings showed either increased or decreased activity for true- versus false-belief trials (Fig. 2B). By contrast, there were no responsive embeddings from the input layer up to layer 8 in this model (Fig. 2D left, right inset). A similar pattern was observed in the LLaMA-30b model (Fig. 2D left, middle inset), in which 5.6% of embeddings at the 19th layer exhibited selectivity to trial type, and very few were responsive from the input up to the 9th layer. This trend of significant artificial neurons appearing in the middle and higher layers was consistent across models. | 2309.01660#14 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 15 | Next, we assessed the percentage of embeddings displaying significant selectivity in the various models, taking for each model the layer with the highest percentage. In general, the percentage of significant embeddings increased with model size (Fig. 2D left). For large models (≥ 12b), an average of 3.9% of embeddings responded to the ToM tasks, and this percentage dropped to 0.6% for smaller models (T-test, statistic = -4.6, p = 4 × 10^-4). Collectively, the percentage of significant embeddings was also closely correlated with model performance (Fig. 2D right). For models with above-chance performance, the percentage of ToM-responsive embeddings increased non-linearly, with an exponential relation between percentage and performance (percentage = a · exp(b · performance), where a = 0.01 ± 2.1 × 10^-5 and b = 6.1 ± 4.4; a curve-fitting sketch follows this record). Together, our findings revealed the presence of embeddings whose modulation related to the theory-of-mind content in multiple large models, a feature that was absent in smaller models with chance-level false-belief performance. | 2309.01660#15 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
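A sketch of the exponential relation reported above, percentage = a · exp(b · performance), fitted with non-linear least squares. The (performance, percentage) points below are illustrative placeholders, not the measured values.

```python
# Fit percentage = a * exp(b * performance) with scipy's non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

performance = np.array([0.52, 0.58, 0.62, 0.66, 0.69])  # false-belief accuracy per model
percentage = np.array([0.5, 1.2, 2.0, 3.5, 5.6])         # % of ToM-responsive embeddings

def expo(x, a, b):
    return a * np.exp(b * x)

(a, b), cov = curve_fit(expo, performance, percentage, p0=(0.01, 6.0))
a_err, b_err = np.sqrt(np.diag(cov))
print(f"a = {a:.3g} ± {a_err:.2g}, b = {b:.2g} ± {b_err:.2g}")
```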
2309.01660 | 16 | Finally, to ensure that the above findings cannot be explained by random fluctuations or other features unrelated to the ToM information in the statements, we conducted a control experiment by randomly permuting the words in the statements. We then applied the same criterion to select responsive embeddings. We found that the percentages were significantly lower compared to those resulting from the intact statements for the large models (T-test, statistic = 4.1, p = 0.002) but not for the small models (T-test, statistic = 1.46, p = 0.16). Together, these results indicated that the presence of ToM-responsive neurons in the large models cannot be explained by cues unrelated to the contextual information in the scenario statements. Therefore, although the percentage of ToM artificial neurons was considerably lower than that observed in the human brain (23%), "artificial" neurons emerged in the middle and higher layers of the large LLMs that responded to ToM features.
# True and false beliefs can be decoded from the entire embeddings | 2309.01660#16 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 17 | # True and false beliefs can be decoded from the entire embeddings
Next, to further investigate the relationship between the hidden embeddings and the models' ToM capability, we examined whether others' beliefs (i.e., true vs. false beliefs) can be directly decoded from the population of hidden embeddings. Specifically, we used all dimensions of the embeddings derived from each layer within a given model and trained a logistic regression with L2 regularization to predict the trial types of trials that were not in the training dataset (details in Methods; a decoding sketch follows this record). Here, we found that a majority of the true- and false-belief trial types were accurately decoded using the entire hidden embeddings from the 25th layer of the Falcon-40b model (Fig. 3A top). Furthermore, the activities of the significant neurons exhibited far greater discrimination between false- and true-belief trials in correctly decoded trials compared to incorrectly decoded trials (average z-scored differences were 0.60 and 0.25, respectively; T-test, statistic = 17.9, p =
1.6 × 10^-62, Fig. 3A bottom). Together, the activities of these artificial neurons therefore appeared to be predictive of the model's ToM performance. | 2309.01660#17 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
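A sketch of the population-level decoder described above: a logistic regression with L2 regularization (sklearn's default penalty) predicts whether a held-out trial is a true- or false-belief trial from one layer's embeddings. Random placeholders are used here, so accuracy hovers near chance; the text reports up to 81% with real Falcon-40b embeddings.

```python
# Decode trial type (true vs. false belief) from layer embeddings with L2 logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pairs, hidden_dim = 38, 1024
X = rng.normal(size=(2 * n_pairs, hidden_dim))  # one row per trial (layer embedding)
y = np.array([0, 1] * n_pairs)                  # 0 = true belief, 1 = false belief

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)       # held-out trials are never seen in training
print(f"decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```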
2309.01660 | 18 | 1.6 × 10^-62, Fig. 3A bottom). Together, the activities of these artificial neurons therefore appeared to be predictive of the model's ToM performance.
Examining all models together, the decoding accuracies increased with the size of the models, with large models (≥ 12b) showing an average decoding accuracy of 75%. The Falcon-40b model showed the highest decoding accuracy of 81%. The embeddings of smaller models (≤ 7b), however, could only predict the trial types at an average accuracy of 67%, significantly lower than that of the large models (T-test, statistic = -4.2, p = 0.001). This observation was also consistent with the proportion of responsive neurons, together suggesting a relation between the size of the models and the proportion of artificial neurons capable of accurately predicting the other's beliefs. | 2309.01660#18 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 19 | Finally, to ensure that the decoding accuracies did not originate from factors unrelated to the scenario statements, we randomly permuted the words in each pair of statements and repeated the same decoding procedures to decode the trial type (Methods). Here, the decoding accuracies of all models dropped to an average of only 55%, significantly lower than all accuracies without the random permutation (T-test, p < 3 × 10^-110). The difference in accuracy between the intact and permuted conditions was larger for the large models, with an average of 19%. These findings showed that the ToM trial types can be robustly decoded from the population of artificial neurons (embeddings), indicating a consistent encoding of ToM features by the embeddings. Together with the results from individual embeddings, our results collectively support the hypothesis that the hidden embeddings possess the capacity to effectively predict the other's beliefs, suggesting their role in facilitating the models' ToM performance.
# Discussion | 2309.01660#19 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 20 | The ability to discern true from false beliefs represents a significant aspect of theory of mind that is proposed to be linked to our conscious mind (27, 28). Recent advances in large language models (LLMs) have revealed their potential for distinguishing objective reality from false beliefs (10, 12). Our study aims to provide an initial investigation into the possible mechanisms underlying ToM in LLMs. By analyzing hidden embeddings from various open-source LLMs, we uncovered hidden embeddings that were predictive of the beliefs of others across richly varied scenarios. This finding is particularly remarkable considering that the embeddings were derived from identical questions following narratives with very similar wording. This suggests the models' ability not only to differentiate subtle variations among closely related sentences, but also to categorize them based on true and false beliefs, thereby encoding the perspective of others. These responses were absent when we randomly permuted the words in the statements while keeping the questions intact. Additionally, the trial types (i.e., true- or false-belief) were accurately decoded from the population of embeddings, further validating the robust representation of | 2309.01660#20 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 21 | the trial types (i.e., true- or false-belief) were accurately decoded from the population of embeddings, further validating the robust representation of ToM within the artificial models. Finally, we observed a strong positive relation between task performance and the proportion of ToM-responsive embeddings, suggesting a role for these embeddings in facilitating that performance. Collectively, our findings indicate an emergence of ToM-related embeddings in the artificial models, supporting the models' capability to capture essential aspects of ToM. | 2309.01660#21 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 22 | Although, unlike humans, LLMs were trained solely on language materials and lacked the rich resources by which humans develop ToM capability (29, 30), the emergent behavior observed in the artificial models bears a striking resemblance to the neuronal activity associated with ToM in the human brain. With hidden embeddings as counterparts of brain neurons, both systems contain neurons that directly respond to the perspective of others. We showed that a substantial proportion of artificial neurons responded selectively to true- or false-belief trials, mirroring prefrontal neurons in humans that exhibit changes in firing rate for the different trial types (9). Furthermore, the LLM layers with high percentages of ToM-responsive embeddings were consistently neither confined to one or two layers nor distributed randomly. Rather, they showed a peak in the middle and higher layers and almost none in the input layers. A similarly distributed organization of ToM areas is observed in the human brain, particularly within the frontal, temporal and parietal cortices (9, 17-20), which have been identified as regions for high-level cognitive processing. ToM-related activity within lower input-processing areas such as the occipital lobe is | 2309.01660#22 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 23 | 17-20), which have been identified as regions for high-level cognitive processing. ToM-related activity within lower input-processing areas such as the occipital lobe is minimal. Finally, we observed that the artificial layers exhibiting ToM responses were contiguous, analogous to the highly interconnected structure of ToM brain areas. Altogether, these observations are remarkable because humans rely on many years of development and real-world social interactions with others to form their ToM capability (29, 30). The LLMs tested here, by comparison, are largely trained on vast language corpora with no explicit experience of interacting with others or direct representation of agency. Yet, despite significant structural and algorithmic differences between the artificial and brain networks, they exhibit a surprising convergence by adopting a similar mechanism for encoding ToM information. This convergence is evident both in their capability to differentiate true and false beliefs and in the emergence of ToM-related neurons that facilitate such cognitive functions. | 2309.01660#23 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 24 | Collectively, these results shed light on the potential of large language models to exhibit theory-of-mind capabilities and contribute to our understanding of cognitive processes in artificial intelligence. However, our findings are limited to open-source LLMs, as we did not have access to the hidden embeddings of higher-performing LLMs such as GPT-4 (7), which could offer further insight into the relationship between model performance and embedding representation. Further, our methods excluded embeddings that were selective for both true- and false-belief trials and focused only on the embeddings that showed selectivity to one of them. Nevertheless, our findings represent an initial exploration of the role of embeddings in ToM within language models and provide insight into how artificial intelligence can exhibit sophisticated cognitive abilities.
# Methods
# Theory of mind (ToM) materials
To assess the artificial language models' capacity for theory of mind and to ensure a direct comparison with human performance, we used testing materials previously employed in human studies during single-neuron recordings. Minor adjustments were made to accommodate the specificities of the artificial models (e.g., statements in pairs were slightly modified to have similar lengths). The ToM ability of each model was evaluated using 76 trials, each consisting of a scenario statement followed by two related questions: a "belief question" related to the belief of the agent | 2309.01660#24 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 25 | in the scenario statement and a "fact question" concerning the physical state of reality (Fig. 1, Table 1). Across all trials we presented, the lengths of the statements varied between 81 and 191 words, with an average of 125 words.
Scenario statements. The trials were grouped in pairs, each containing one true-belief and one false-belief trial. The trials in a pair start with very similar scenario statements, providing background from which the reader can infer whether the agent's belief in the story is aligned with reality or not (true belief or false belief, respectively; see examples in Table 1). In addition, we ensured that each pair of true- and false-belief trials contained the same number of words in the statements, so that potential variance stemming from different word positions in the sentence is minimized (a trial-pair sketch follows this record). | 2309.01660#25 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
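A minimal sketch of how one matched pair of trials could be organized; the field names are hypothetical. The check in __post_init__ mirrors the requirement that both statements in a pair contain the same number of words.

```python
# Hypothetical data structure for a matched true/false-belief trial pair.
from dataclasses import dataclass

@dataclass
class ToMTrial:
    statement: str
    fact_question: str
    belief_question: str
    trial_type: str  # "true_belief" or "false_belief"

@dataclass
class ToMTrialPair:
    true_belief: ToMTrial
    false_belief: ToMTrial

    def __post_init__(self) -> None:
        n_true = len(self.true_belief.statement.split())
        n_false = len(self.false_belief.statement.split())
        assert n_true == n_false, "paired statements must have equal word counts"
```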
2309.01660 | 26 | Questions for evaluating model performance. Based on the statements described above, we designed two categories of questions to test the ToM capability of the large language models (LLMs): a fact question and an other-belief question (Table 1). We edited the structure of the questions in order to obtain an objective evaluation of model ability. For example, after a scenario statement like "Charles left his wallet on the counter as he was leaving the store. The wallet fell on the floor. Charles returns", if we asked "Where will Charles look for the wallet?", an LLM might generate a long paragraph without directly answering the question, making it difficult to assess objectively whether the model answered correctly. Here, given that all the LLMs we assessed generate outputs in the form of predicted upcoming words with a probability distribution over all possible tokens, we modified the questions to align with this characteristic of the LLMs. In the example provided above, we asked "Charles will look for the wallet on the". In this way, the LLMs are likely to predict a location as the upcoming word. | 2309.01660#26 | Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain | With their recent development, large language models (LLMs) have been found
to exhibit a certain level of Theory of Mind (ToM), a complex cognitive
capacity that is related to our conscious mind and that allows us to infer
another's beliefs and perspective. While human ToM capabilities are believed to
derive from the neural activity of a broadly interconnected brain network,
including that of dorsal medial prefrontal cortex (dmPFC) neurons, the precise
processes underlying LLM's capacity for ToM or their similarities with that of
humans remains largely unknown. In this study, we drew inspiration from the
dmPFC neurons subserving human ToM and employed a similar methodology to
examine whether LLMs exhibit comparable characteristics. Surprisingly, our
analysis revealed a striking resemblance between the two, as hidden embeddings
(artificial neurons) within LLMs started to exhibit significant responsiveness
to either true- or false-belief trials, suggesting their ability to represent
another's perspective. These artificial embedding responses were closely
correlated with the LLMs' performance during the ToM tasks, a property that was
dependent on the size of the models. Further, the other's beliefs could be
accurately decoded using the entire embeddings, indicating the presence of the
embeddings' ToM capability at the population level. Together, our findings
revealed an emergent property of LLMs' embeddings that modified their
activities in response to ToM features, offering initial evidence of a parallel
between the artificial model and neurons in the human brain. | http://arxiv.org/pdf/2309.01660 | Mohsen Jamali, Ziv M. Williams, Jing Cai | cs.CL, cs.AI | null | null | cs.CL | 20230904 | 20230904 | [
{
"id": "2302.02083"
},
{
"id": "2304.02015"
},
{
"id": "2307.07526"
},
{
"id": "2305.10601"
},
{
"id": "2304.09102"
},
{
"id": "2302.08399"
},
{
"id": "1910.03771"
},
{
"id": "2306.01116"
},
{
"id": "2305.12295"
}
] |
2309.01660 | 27 | Questions for evaluating others' belief processing by hidden embeddings. Here, the goal of these questions is not to evaluate model performance but to examine whether the hidden embeddings show selectivity to the trial types (false-belief or true-belief) and to directly compare the results with those from single neurons in the human brain. We therefore used the same set of questions as those posed to the human participants to ensure a reasonable comparison with the findings from single neurons recorded in the prefrontal cortex. Specifically, we asked the same belief question for each pair of true- and false-belief trials, using the same format as in (9), e.g., "Where will Charles look for his wallet?" In this way, each pair of true- and false-belief trials was composed of very similar words and exactly the same questions (Table 1, Fig. 2).
Table 1. Example of the task materials
| Trial type | Statement | Fact question | Belief question | Belief question in the human study |
| --- | --- | --- | --- | --- |
| False belief | Mary put fish inside a jewelry box while her son wasn't looking. Her son opens the box. | Inside the box, there is | Inside the box, he expects to find | What does he expect to find? |
| True belief | Mary put jewelry inside a jewelry box and her son sees it. Her son opens the box. | Inside the box, there is | Inside the box, he expects to find | What does he expect to find? |
| 2309.01660#27 |
2309.01660 | 28 |

| Trial type | Statement | Fact question | Belief question | Belief question in the human study |
| --- | --- | --- | --- | --- |
| False belief | Ned and you take a photo of an apple on a tree. While the photo develops, Ned leaves and is unaware that a wind blows the apple to ground. | Currently, the apple is on the | Ned believes that the apple is on the | Where does Ned believe the apple is? |
| True belief | Ned and you take a photo of an apple on a tree. While the photo develops, you and Ned see a strong wind blow the apple on the ground. | Currently, the apple is on the | Ned believes that the apple is on the | Where does Ned believe the apple is? |
| False belief | Charles left his wallet on the counter as he was leaving the store. The wallet fell on the floor. Charles returns. | The wallet is on the | Charles will look for the wallet on the | Where will Charles look for the wallet? |
| True belief | Charles left his wallet on the counter as he was leaving the store. No one has touched his wallet. Charles returns. | The wallet is on the | Charles will look for the wallet on the | Where will Charles look for the wallet? |
| 2309.01660#28 |
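For the belief questions above, a per-embedding selectivity analysis analogous to the single-neuron analysis can be sketched as follows. This is a schematic under stated assumptions rather than the authors' exact pipeline: activations are read from the final token of one layer of gpt2-medium, only the Table 1 scenarios are used as trials, and selectivity is assessed with a two-sample t-test.

```python
# Schematic per-unit selectivity test on hidden embeddings (illustrative assumptions:
# gpt2-medium, last-token activations of the final layer, two-sample t-test).
import numpy as np
import torch
from scipy import stats
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
model = AutoModelForCausalLM.from_pretrained("gpt2-medium", output_hidden_states=True)
model.eval()

def last_token_embedding(text: str, layer: int = -1) -> np.ndarray:
    """Hidden state of the final token at the requested layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]  # (1, seq_len, hidden_dim)
    return hidden[0, -1].numpy()

false_trials = [
    "Mary put fish inside a jewelry box while her son wasn't looking. "
    "Her son opens the box. Inside the box, he expects to find",
    "Charles left his wallet on the counter as he was leaving the store. "
    "The wallet fell on the floor. Charles returns. Charles will look for the wallet on the",
]
true_trials = [
    "Mary put jewelry inside a jewelry box and her son sees it. "
    "Her son opens the box. Inside the box, he expects to find",
    "Charles left his wallet on the counter as he was leaving the store. "
    "No one has touched his wallet. Charles returns. Charles will look for the wallet on the",
]

X_false = np.stack([last_token_embedding(t) for t in false_trials])
X_true = np.stack([last_token_embedding(t) for t in true_trials])

# One test per hidden unit; units with small p-values respond differently to
# true- vs false-belief trials, analogous to belief-selective dmPFC neurons.
t_vals, p_vals = stats.ttest_ind(X_true, X_false, axis=0)
print("number of selective units (p < 0.01):", int(np.sum(p_vals < 0.01)))
```

In practice many more trial pairs per condition would be needed for a meaningful test; the two pairs here only illustrate the shape of the computation.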
2309.01660 | 29 | # Control tasks
To ensure that our observations did not derive from factors unrelated to the scenario described in the statements, we performed two controls. First, we created shuffled control trials by randomly permuting the words in each statement while keeping the questions intact (Table 2; a minimal sketch of this shuffling procedure is given below). In this way, the statement retained the same words but lost its contextual information. Second, we estimated the impact of any cues within the questions themselves (e.g., a potential imbalance in word frequency) by presenting each model with the questions only. Together, these two controls estimate the impact of factors unrelated to the ToM content conveyed by the statement.
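A minimal sketch of the two controls; the whitespace tokenization and fixed random seed are illustrative assumptions.

```python
# Sketch of the shuffled control: randomly permute the words of the statement
# while keeping the question intact (whitespace tokenization and the seed are
# illustrative assumptions).
import random

def shuffled_control(statement: str, question: str, seed: int = 0) -> str:
    words = statement.split()
    random.Random(seed).shuffle(words)
    return " ".join(words) + " " + question

statement = ("Mary put fish inside a jewelry box while her son wasn't looking. "
             "Her son opens the box.")
question = "Inside the box, he expects to find"
print(shuffled_control(statement, question))

# Question-only control: present the question with no statement at all.
print(question)
```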
| 2309.01660#29 |
2309.01660 | 30 |

Table 2. Example of the control task created by randomly shuffling words in the statement
| Trial type | Statement | Fact question | Belief question | Belief question in the human study |
| --- | --- | --- | --- | --- |
| False belief | her son jewelry Mary looking. Her fish son put while box inside wasn't opens the box. a | Inside the box, there is | Inside the box, he expects to find | What does he expect to find? |
| True belief | inside Her and the box it. Mary her box. jewelry a opens son put jewelry sees son | Inside the box, there is | Inside the box, he expects to find | What does he expect to find? |
| False belief | and take the photo the a wind an Ned Ned leaves tree. apple on is unaware a photo blows and develops, ground. While of you apple a to that | Currently, the apple is on the | Ned believes that the apple is on the | Where does Ned believe the apple is? |
| True belief | While on you develops, the on you the Ned apple blow the apple an tree. Ned and take and photo a ground. strong a wind of see a photo | Currently, the apple is on the | Ned believes that the apple is on the | Where does Ned believe the apple is? |
| False belief | on store. his left as the counter leaving was The wallet returns on the Charles wallet floor. fell Charles the he | The wallet is on the | Charles will look for the wallet on the | Where will Charles look for the wallet? |
| 2309.01660#30 |
2309.01660 | 31 |

| Trial type | Statement | Fact question | Belief question | Belief question in the human study |
| --- | --- | --- | --- | --- |
| True belief | has No his one counter store. the returns. as on wallet wallet. Charles Charles the was he leaving touched his left | The wallet is on the | Charles will look for the wallet on the | Where will Charles look for the wallet? |
# Large language models (LLMs)
Our study primarily focuses on four families of high-performing, independently trained language models that are publicly available as open source. All LLMs examined are composed of sequentially connected transformer modules. Each family contains multiple versions, characterized by varying numbers of parameters and, in some cases, fine-tuning on specific datasets. Specifically, these models include Falcon (1b, 7b, 40b), LLaMA (3b, 7b, 13b, 30b, 33b), Pythia (3b, 7b, 12b), and GPT-2 (medium, large, xl). The tokenizers and parameters for all models were downloaded in July 2023 and have not been updated since. Details of the models and the datasets on which they were fine-tuned are listed in Table 3. All models and tokenizers were loaded via Hugging Face in Python (31), as sketched below. For models with at most 7b parameters, we used a desktop computer with a single GPU (NVIDIA GeForce RTX 4090); for larger models, we used the Massachusetts General Hospital GPU cluster with up to eight GPUs (NVIDIA DGX-1) for model performance and evaluation. | 2309.01660#31 |
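A sketch of how one of the evaluated checkpoints can be loaded with the transformers library; the repository name, dtype, and device settings are illustrative assumptions rather than the authors' exact configuration.

```python
# Illustrative loading of one evaluated checkpoint (repository name, dtype and
# device settings are assumptions, not the authors' exact configuration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-7b"  # listed in Table 3; other families load the same way

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,    # fits a 7b model on a single consumer GPU
    device_map="auto",            # shards larger models across multiple GPUs (requires accelerate)
    output_hidden_states=True,    # exposes the hidden embeddings analyzed in this study
    trust_remote_code=True,       # needed for Falcon checkpoints with mid-2023 transformers versions
)
model.eval()
```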
2309.01660 | 33 |

| Model name | Model source | Size | Description from model developer |
| --- | --- | --- | --- |
| Falcon-1b | Falcon (tiiuae/falcon-rw-1b) | 1b | Decoder model; Trained on 350B tokens of RefinedWeb (22) |
| Falcon-7b | Falcon (tiiuae/falcon-7b) | 7b | Decoder model; Trained on 1,500B tokens of RefinedWeb; Enhanced with curated corpora. |
| Falcon-40b | Falcon | 40b | Decoder model; Based on Falcon-40B; Finetuned on a mixture of Baize. |
| LLaMa-3b-1 | LLaMa | 3b | An Open Reproduction of LLaMA (32) |
| LLaMa-7b-1 | LLaMa | 7b | An Open Reproduction of LLaMA |
| LLaMa-13b-1 | LLaMa | 13b | Merge of LLAMA-13b and SuperCOT LoRA (33) |
| LLaMa-30b-1 | LLaMa | 30b | Supercot; Work with langchain prompting |
| LLaMa-7b-2 | LLaMa | 7b | Chatbot; Fine-tuned on user-shared conversations from ShareGPT (34) |
| LLaMa-13b-3 | LLaMa | 13b | Fine-tuned on |
| LLaMa-33b-4 | LLaMa | | |
| Pythia-3b | Pythia | | |
| 2309.01660#33 |