doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
stringlengths 10–10 | int64 0–936 | stringlengths 401–2.02k | stringlengths 12–14 | stringlengths 8–162 | stringlengths 228–1.92k | stringlengths 31–31 | stringlengths 7–6.97k | stringlengths 5–107 | stringlengths 4–398 ⌀ | stringlengths 8–194 ⌀ | stringlengths 5–17 | stringlengths 8–8 | stringlengths 8–8 | list
2308.12033 | 42 | P.; Hallinan, S.; Gao, L.; Wiegreffe, S.; Alon, U.; Dziri, N.; Prabhumoye, S.; Yang, Y.; et al. 2023. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651. Mollas, I.; Chrysopoulou, Z.; Karlos, S.; and Tsoumakas, G. 2020. Ethos: an online hate speech detection dataset. arXiv preprint arXiv:2006.08328. OpenAI. 2023. GPT-4 Technical Report. Technical Report arXiv:2303.08774, OpenAI. Opitz, D.; and Shavlik, J. 1995. Generating accurate and diverse members of a neural-network ensemble. Advances in neural information processing systems, 8. | 2308.12033#42 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | As an effective tool for eliciting the power of Large Language Models (LLMs),
prompting has recently demonstrated unprecedented abilities across a variety of
complex tasks. To further improve the performance, prompt ensemble has
attracted substantial interest for tackling the hallucination and instability
of LLMs. However, existing methods usually adopt a two-stage paradigm, which
requires a pre-prepared set of prompts with substantial manual effort, and is
unable to perform directed optimization for different weak learners. In this
paper, we propose a simple, universal, and automatic method named PREFER (Prompt
Ensemble learning via Feedback-Reflect-Refine) to address the stated
limitations. Specifically, given the fact that weak learners are supposed to
focus on hard examples during boosting, PREFER builds a feedback mechanism for
reflecting on the inadequacies of existing weak learners. Based on this, the
LLM is required to automatically synthesize new prompts for iterative
refinement. Moreover, to enhance stability of the prompt effect evaluation, we
propose a novel prompt bagging method involving forward and backward thinking,
which is superior to majority voting and is beneficial for both feedback and
weight calculation in boosting. Extensive experiments demonstrate that our
PREFER achieves state-of-the-art performance in multiple types of tasks by a
significant margin. We have made our code publicly available. | http://arxiv.org/pdf/2308.12033 | Chenrui Zhang, Lin Liu, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, Mingchen Cai | cs.CL, cs.AI | 8 pages, 4 figures | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2305.03495"
},
{
"id": "2212.09597"
},
{
"id": "2206.07682"
},
{
"id": "2212.10559"
},
{
"id": "1705.00648"
},
{
"id": "1606.05250"
},
{
"id": "2306.16564"
},
{
"id": "2106.09685"
},
{
"id": "2006.08328"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "1704.05426"
},
{
"id": "2303.17651"
},
{
"id": "2304.05970"
},
{
"id": "1508.05326"
},
{
"id": "2203.14465"
},
{
"id": "2303.08774"
},
{
"id": "2103.10385"
}
] |
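The PREFER abstract in the row above describes a boosting-style loop in which each prompt acts as a weak learner: the ensemble's errors are fed back to the LLM, which reflects on them and synthesizes a refined prompt, and prompts are combined with weights learned during boosting. The Python sketch below illustrates that control flow only; it is not the authors' released code, and `query_llm`, `reflect_llm`, the feedback template, and the data are hypothetical placeholders.

```python
import math
from typing import Callable, List, Tuple

def boosted_prompt_ensemble(
    query_llm: Callable[[str, str], str],   # (prompt, input_text) -> predicted label (hypothetical LLM wrapper)
    reflect_llm: Callable[[str], str],      # feedback text -> refined task prompt (hypothetical)
    train: List[Tuple[str, str]],           # (input_text, gold_label) pairs
    init_prompt: str,
    rounds: int = 4,
) -> Callable[[str], str]:
    """AdaBoost-flavored prompt ensemble in the spirit of feedback-reflect-refine (sketch only)."""
    n = len(train)
    weights = [1.0 / n] * n                 # per-example weights, uniform at the start
    prompt, ensemble = init_prompt, []

    for _ in range(rounds):
        preds = [query_llm(prompt, x) for x, _ in train]
        # Weighted error of the current prompt (the "weak learner").
        err = sum(w for w, p, (_, y) in zip(weights, preds, train) if p != y)
        err = min(max(err, 1e-6), 1.0 - 1e-6)
        alpha = 0.5 * math.log((1.0 - err) / err)   # this prompt's weight in the final vote
        ensemble.append((prompt, alpha))

        # Re-weight examples so the ones this prompt gets wrong dominate the next round.
        weights = [w * math.exp(alpha if p != y else -alpha)
                   for w, p, (_, y) in zip(weights, preds, train)]
        total = sum(weights)
        weights = [w / total for w in weights]

        # Feedback -> reflect -> refine: ask the LLM for a new prompt aimed at the hard examples.
        hard = [x for (x, y), p in zip(train, preds) if p != y][:5]
        feedback = ("The prompt:\n" + prompt + "\nfailed on these examples:\n"
                    + "\n".join(hard)
                    + "\nReflect on why it failed and write an improved prompt.")
        prompt = reflect_llm(feedback)

    def predict(x: str) -> str:
        votes = {}
        for p, a in ensemble:
            label = query_llm(p, x)
            votes[label] = votes.get(label, 0.0) + a
        return max(votes, key=votes.get)

    return predict
```

The weight update here is the standard AdaBoost rule; PREFER's forward/backward prompt bagging for more stable prompt evaluation is not shown.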
2308.12284 | 42 | [18] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016.
[19] Danny Hernandez, Tom B. Brown, Tom Conerly, Nova DasSarma, Dawn Drain, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, T. J. Henighan, Tristan Hume, Scott Johnston, Benjamin Mann, Christopher Olah, Catherine Olsson, Dario Amodei, Nicholas Joseph, Jared Kaplan, and Sam McCandlish. Scaling laws and interpretability of learning from repeated data. ArXiv, abs/2205.10487, 2022.
[20] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and L. Sifre. Training compute-optimal large language models. ArXiv, abs/2203.15556, 2022. | 2308.12284#42 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
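The D4 abstract in the row above attributes its gains to data selection with pre-trained model embeddings applied on top of MinHash de-duplication. As a rough illustration of that kind of pipeline (not the paper's released implementation), the sketch below clusters normalized document embeddings, drops near-duplicates within each cluster, and then prunes the most prototypical survivors; the cluster count, similarity threshold, and the `embeddings` matrix are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_documents(embeddings: np.ndarray,
                     keep_fraction: float = 0.8,
                     n_clusters: int = 100,
                     dedup_threshold: float = 0.95) -> np.ndarray:
    """Return indices of documents to keep (embedding-based de-dup + diversification, sketch)."""
    # Cosine geometry: work with unit-norm embeddings.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

    keep = np.ones(len(X), dtype=bool)
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        if len(idx) < 2:
            continue
        # Semantic de-duplication: within a cluster, drop the later member of any
        # pair whose cosine similarity exceeds the threshold.
        sims = X[idx] @ X[idx].T
        for i in range(len(idx)):
            if not keep[idx[i]]:
                continue
            dup = np.where(sims[i] > dedup_threshold)[0]
            keep[idx[dup[dup > i]]] = False

    # Diversification: among survivors, keep the documents farthest from their
    # cluster centroid (least prototypical), up to the document budget.
    surv = np.where(keep)[0]
    dists = np.linalg.norm(X[surv] - km.cluster_centers_[km.labels_[surv]], axis=1)
    budget = int(keep_fraction * len(X))
    order = np.argsort(-dists)              # farthest-from-centroid first
    return surv[order[:min(budget, len(surv))]]
```

A web-scale pipeline would operate on billions of documents with approximate nearest-neighbor indices (e.g., FAISS, cited as [24] in the adjacent rows) rather than dense per-cluster similarity matrices.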
2308.12033 | 43 | Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744. Pitis, S.; Zhang, M. R.; Wang, A.; and Ba, J. 2023. Boosted Prompt Ensembles for Large Language Models. arXiv preprint arXiv:2304.05970. Pryzant, R.; Iter, D.; Li, J.; Lee, Y. T.; Zhu, C.; and Zeng, M. 2023. Automatic prompt optimization with "gradient descent" and beam search. arXiv preprint arXiv:2305.03495. Qiao, S.; Ou, Y.; Zhang, N.; Chen, X.; Yao, Y.; Deng, S.; Tan, C.; Huang, F.; and Chen, H. 2022. Reasoning with language model prompting: A survey. arXiv preprint arXiv:2212.09597. Radford, A.; | 2308.12033#43 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | As an effective tool for eliciting the power of Large Language Models (LLMs),
prompting has recently demonstrated unprecedented abilities across a variety of
complex tasks. To further improve the performance, prompt ensemble has
attracted substantial interest for tackling the hallucination and instability
of LLMs. However, existing methods usually adopt a two-stage paradigm, which
requires a pre-prepared set of prompts with substantial manual effort, and is
unable to perform directed optimization for different weak learners. In this
paper, we propose a simple, universal, and automatic method named PREFER (Prompt
Ensemble learning via Feedback-Reflect-Refine) to address the stated
limitations. Specifically, given the fact that weak learners are supposed to
focus on hard examples during boosting, PREFER builds a feedback mechanism for
reflecting on the inadequacies of existing weak learners. Based on this, the
LLM is required to automatically synthesize new prompts for iterative
refinement. Moreover, to enhance stability of the prompt effect evaluation, we
propose a novel prompt bagging method involving forward and backward thinking,
which is superior to majority voting and is beneficial for both feedback and
weight calculation in boosting. Extensive experiments demonstrate that our
PREFER achieves state-of-the-art performance in multiple types of tasks by a
significant margin. We have made our code publicly available. | http://arxiv.org/pdf/2308.12033 | Chenrui Zhang, Lin Liu, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, Mingchen Cai | cs.CL, cs.AI | 8 pages, 4 figures | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2305.03495"
},
{
"id": "2212.09597"
},
{
"id": "2206.07682"
},
{
"id": "2212.10559"
},
{
"id": "1705.00648"
},
{
"id": "1606.05250"
},
{
"id": "2306.16564"
},
{
"id": "2106.09685"
},
{
"id": "2006.08328"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "1704.05426"
},
{
"id": "2303.17651"
},
{
"id": "2304.05970"
},
{
"id": "1508.05326"
},
{
"id": "2203.14465"
},
{
"id": "2303.08774"
},
{
"id": "2103.10385"
}
] |
2308.12284 | 43 | [21] Srinivas Iyer, Xiaojuan Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Veselin Stoyanov. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. ArXiv, abs/2212.12017, 2022.
[22] Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021.
[23] Angela H Jiang, Daniel L-K Wong, Giulio Zhou, David G Andersen, Jeffrey Dean, Gregory R Ganger, Gauri Joshi, Michael Kaminksy, Michael Kozuch, Zachary C Lipton, et al. Accelerating deep learning by focusing on the biggest losers. arXiv preprint arXiv:1910.00762, 2019. | 2308.12284#43 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12033 | 44 | H. 2022. Reasoning with language model prompting: A survey. arXiv preprint arXiv:2212.09597. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I.; et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8): 9. Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Wang, W. Y. 2017. "liar, liar pants on fire": A new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648. Wang, X.; Wei, J.; Schuurmans, D.; Le, Q.; Chi, E.; Narang, S.; Chowdhery, A.; and Zhou, D. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Wei, J.; Tay, Y.; Bommasani, R.; | 2308.12033#44 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | As an effective tool for eliciting the power of Large Language Models (LLMs),
prompting has recently demonstrated unprecedented abilities across a variety of
complex tasks. To further improve the performance, prompt ensemble has
attracted substantial interest for tackling the hallucination and instability
of LLMs. However, existing methods usually adopt a two-stage paradigm, which
requires a pre-prepared set of prompts with substantial manual effort, and is
unable to perform directed optimization for different weak learners. In this
paper, we propose a simple, universal, and automatic method named PREFER (Prompt
Ensemble learning via Feedback-Reflect-Refine) to address the stated
limitations. Specifically, given the fact that weak learners are supposed to
focus on hard examples during boosting, PREFER builds a feedback mechanism for
reflecting on the inadequacies of existing weak learners. Based on this, the
LLM is required to automatically synthesize new prompts for iterative
refinement. Moreover, to enhance stability of the prompt effect evaluation, we
propose a novel prompt bagging method involving forward and backward thinking,
which is superior to majority voting and is beneficial for both feedback and
weight calculation in boosting. Extensive experiments demonstrate that our
PREFER achieves state-of-the-art performance in multiple types of tasks by a
significant margin. We have made our code publicly available. | http://arxiv.org/pdf/2308.12033 | Chenrui Zhang, Lin Liu, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, Mingchen Cai | cs.CL, cs.AI | 8 pages, 4 figures | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2305.03495"
},
{
"id": "2212.09597"
},
{
"id": "2206.07682"
},
{
"id": "2212.10559"
},
{
"id": "1705.00648"
},
{
"id": "1606.05250"
},
{
"id": "2306.16564"
},
{
"id": "2106.09685"
},
{
"id": "2006.08328"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "1704.05426"
},
{
"id": "2303.17651"
},
{
"id": "2304.05970"
},
{
"id": "1508.05326"
},
{
"id": "2203.14465"
},
{
"id": "2303.08774"
},
{
"id": "2103.10385"
}
] |
2308.12284 | 44 | [24] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547, 2019.
[25] Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. Scaling laws for neural language models. ArXiv, abs/2001.08361, 2020.
[26] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[27] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. Deduplicating training data makes language models better. In Annual Meeting of the Association for Computational Linguistics, 2021.
[28] Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning, 2012. | 2308.12284#44 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12033 | 45 | of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; et al. 2022a. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.; Chi, E.; Le, Q. V.; Zhou, D.; et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35: 24824–24837. Williams, A.; Nangia, N.; and Bowman, S. R. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Zelikman, E.; Mu, J.; Goodman, N. D.; and Wu, Y. T. 2022. Star: Self-taught reasoner bootstrapping | 2308.12033#45 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | As an effective tool for eliciting the power of Large Language Models (LLMs),
prompting has recently demonstrated unprecedented abilities across a variety of
complex tasks. To further improve the performance, prompt ensemble has
attracted substantial interest for tackling the hallucination and instability
of LLMs. However, existing methods usually adopt a two-stage paradigm, which
requires a pre-prepared set of prompts with substantial manual effort, and is
unable to perform directed optimization for different weak learners. In this
paper, we propose a simple, universal, and automatic method named PREFER (Prompt
Ensemble learning via Feedback-Reflect-Refine) to address the stated
limitations. Specifically, given the fact that weak learners are supposed to
focus on hard examples during boosting, PREFER builds a feedback mechanism for
reflecting on the inadequacies of existing weak learners. Based on this, the
LLM is required to automatically synthesize new prompts for iterative
refinement. Moreover, to enhance stability of the prompt effect evaluation, we
propose a novel prompt bagging method involving forward and backward thinking,
which is superior to majority voting and is beneficial for both feedback and
weight calculation in boosting. Extensive experiments demonstrate that our
PREFER achieves state-of-the-art performance in multiple types of tasks by a
significant margin. We have made our code publicly available. | http://arxiv.org/pdf/2308.12033 | Chenrui Zhang, Lin Liu, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, Mingchen Cai | cs.CL, cs.AI | 8 pages, 4 figures | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2305.03495"
},
{
"id": "2212.09597"
},
{
"id": "2206.07682"
},
{
"id": "2212.10559"
},
{
"id": "1705.00648"
},
{
"id": "1606.05250"
},
{
"id": "2306.16564"
},
{
"id": "2106.09685"
},
{
"id": "2006.08328"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "1704.05426"
},
{
"id": "2303.17651"
},
{
"id": "2304.05970"
},
{
"id": "1508.05326"
},
{
"id": "2203.14465"
},
{
"id": "2303.08774"
},
{
"id": "2103.10385"
}
] |
2308.12284 | 45 | [29] Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, et al. Deja vu: Contextual sparsity for efficient llms at inference time, 2023.
11
[30] S. Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David M. Mimno, and Daphne Ippolito. A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity. ArXiv, abs/2305.13169, 2023.
[31] Kristof Meding, Luca M Schulze Buschoff, Robert Geirhos, and Felix A Wichmann. Trivial or impossible – dichotomous data difficulty masks model differences (on imagenet and beyond). arXiv preprint arXiv:2110.05922, 2021.
[32] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016. | 2308.12284#45 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12033 | 46 | Zelikman, E.; Mu, J.; Goodman, N. D.; and Wu, Y. T. 2022. Star: Self-taught reasoner bootstrapping reasoning with reasoning. arXiv preprint arXiv:2203.14465. Zhao, T.; Wei, M.; Preston, J. S.; and Poon, H. 2023a. Automatic Calibration and Error Correction for Large Language Models via Pareto Optimal Self-Supervision. arXiv preprint arXiv:2306.16564. Zhao, W. X.; Zhou, K.; Li, J.; Tang, T.; Wang, X.; Hou, Y.; Min, Y.; Zhang, B.; Zhang, J.; Dong, Z.; et al. 2023b. A survey of large language models. arXiv preprint arXiv:2303.18223. Zhao, Z.; Wallace, E.; Feng, S.; Klein, D.; and Singh, S. 2021. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, 12697–12706. PMLR. | 2308.12033#46 | PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine | As an effective tool for eliciting the power of Large Language Models (LLMs),
prompting has recently demonstrated unprecedented abilities across a variety of
complex tasks. To further improve the performance, prompt ensemble has
attracted substantial interest for tackling the hallucination and instability
of LLMs. However, existing methods usually adopt a two-stage paradigm, which
requires a pre-prepared set of prompts with substantial manual effort, and is
unable to perform directed optimization for different weak learners. In this
paper, we propose a simple, universal, and automatic method named PREFER (Prompt
Ensemble learning via Feedback-Reflect-Refine) to address the stated
limitations. Specifically, given the fact that weak learners are supposed to
focus on hard examples during boosting, PREFER builds a feedback mechanism for
reflecting on the inadequacies of existing weak learners. Based on this, the
LLM is required to automatically synthesize new prompts for iterative
refinement. Moreover, to enhance stability of the prompt effect evaluation, we
propose a novel prompt bagging method involving forward and backward thinking,
which is superior to majority voting and is beneficial for both feedback and
weight calculation in boosting. Extensive experiments demonstrate that our
PREFER achieves state-of-the-art performance in multiple types of tasks by a
significant margin. We have made our code publicly available. | http://arxiv.org/pdf/2308.12033 | Chenrui Zhang, Lin Liu, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, Mingchen Cai | cs.CL, cs.AI | 8 pages, 4 figures | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2305.03495"
},
{
"id": "2212.09597"
},
{
"id": "2206.07682"
},
{
"id": "2212.10559"
},
{
"id": "1705.00648"
},
{
"id": "1606.05250"
},
{
"id": "2306.16564"
},
{
"id": "2106.09685"
},
{
"id": "2006.08328"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "1704.05426"
},
{
"id": "2303.17651"
},
{
"id": "2304.05970"
},
{
"id": "1508.05326"
},
{
"id": "2203.14465"
},
{
"id": "2303.08774"
},
{
"id": "2103.10385"
}
] |
2308.12284 | 46 | [33] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.
[34] Sören Mindermann, Jan M Brauner, Muhammed T Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Höltgen, Aidan N Gomez, Adrien Morisot, Sebastian Farquhar, et al. Prioritized training on points that are learnable, worth learning, and not yet learnt. In International Conference on Machine Learning, pages 15630–15649. PMLR, 2022.
[35] Baharan Mirzasoleiman, Jeff Bilmes, and Jure Leskovec. Coresets for data-efficient training of machine learning models. In International Conference on Machine Learning, pages 6950–6960. PMLR, 2020.
[36] Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696, 2016. | 2308.12284#46 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 47 | [37] Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. Scaling data-constrained language models. 2023.
[38] Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training. Advances in Neural Information Processing Systems, 34:20596–20607, 2021.
[39] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: Outperforming curated corpora with web data, and web data only.
[40] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. | 2308.12284#47 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 48 | [41] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020.
[42] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021.
[43] Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. Are emergent abilities of large language models a mirage? arXiv preprint arXiv:2304.15004, 2023.
[44] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489, 2017. | 2308.12284#48 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 49 | [45] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019.
[46] Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990, 2022.
[47] Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. ArXiv, abs/2206.14486, 2022.
12 | 2308.12284#49 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 50 | 12
[48] Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems, 35:38274–38290, 2022.
[49] Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018.
[50] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. ArXiv, abs/2302.13971, 2023.
[51] Pablo Villalobos, Jaime Sevilla, Lennart Heim, Tamay Besiroglu, Marius Hobbhahn, and Anson Ho. Will we run out of data? an analysis of the limits of scaling datasets in machine learning, 2022. | 2308.12284#50 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 51 | [52] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32, 2019. [53] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, M. Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddharth | 2308.12284#51 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 52 | Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddharth Deepak Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hanna Hajishirzi, and Daniel Khashabi. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. In Conference on Empirical Methods in Natural Language Processing, 2022. | 2308.12284#52 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 53 | [54] Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. Ccnet: Extracting high quality monolingual datasets from web crawl data. ArXiv, abs/1911.00359, 2019.
[55] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, and Adams Wei Yu. Doremi: Optimizing data mixtures speeds up language model pretraining. ArXiv, abs/2305.10429, 2023.
[56] Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy Liang. Data selection for language models via importance resampling. ArXiv, abs/2302.03169, 2023. | 2308.12284#53 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 54 | [57] Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, and Yang You. To repeat or not to repeat: Insights from scaling llm under token-crisis. arXiv preprint arXiv:2305.13230, 2023. [58] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a
machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
[59] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. Opt: Open pre-trained transformer language models. ArXiv, abs/2205.01068, 2022.
[60] Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. Dataset condensation with gradient matching. arXiv preprint arXiv:2006.05929, 2020. | 2308.12284#54 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 55 | [61] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19–27, 2015.
13
# A Appendix
# A.1 Experimental Setup Details
# A.1.1 Hyperparameters for model training | 2308.12284#55 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 56 | 13
# A Appendix
# A.1 Experimental Setup Details
# A.1.1 Hyperparameters for model training
As mentioned in Section 3.4, we use the same hyperparameters and configurations as the original OPT model architecture from Zhang et al. [59]. We describe these hyperparameters briefly in Table A1. We chose these configurations because they are openly available and have been used as the standard in many previous works [1, 13, 29, 48, 59]. All models use GELU activation [18], Adam optimizer [26] with β1 = 0.9, β2 = 0.95, ϵ = 10^-8, weight decay set to 0.1, and we clip gradient norms at 1.0. We use a polynomial learning rate schedule, where the learning rate warms up from 0.0 to the peak learning rate over the first 375 million tokens, and is then annealed to (0.1 * Peak LR) over the remaining (T_target - 375)M tokens. We train all our models in fully sharded data parallel mode [2] using Megatron-LM Tensor Parallelism [45] with fp16 precision. For reproducibility, we note that perhaps the only difference from the original configuration in Zhang et al. [59] is that we do not use dropout. | 2308.12284#56 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
Table A1: Model architecture details. Most of the parameter configurations are the same as in Table 1 of Zhang et al. [59]. Batch size denotes the total tokens that the model sees during one gradient descent update.
| | 8M | 125M | 1.3B | 6.7B |
|---|---|---|---|---|
| # Layers | 4 | 12 | 24 | 32 |
| # Attention heads | 2 | 12 | 32 | 32 |
| Embedding size | 128 | 768 | 2048 | 4096 |
| Peak learning rate | 1.0e-3 | 6.0e-4 | 2.0e-4 | 1.2e-4 |
| Batch size (tokens) | 0.5M | 0.5M | 1M | 2M |
# A.1.2 Dataset Curation Details
In this subsection, we describe how we curate CC-dedup, the starting source dataset used throughout the paper. We start with 5 CommonCrawl dumps (https://commoncrawl.org/the-data/get-started/), which range from 2017 to 2020. We then use CC-net [54] to de-duplicate data at the paragraph level, remove non-English web pages, and filter out low-quality pages. The pipeline we use is identical to the pipeline used in Touvron et al. [50] (see the section after the subtitle "English CommonCrawl [67%]", within Section 2).
On top of this, we add an additional step of MinHash [8] de-duplication at the document level. The parameters for MinHash are 20 hashes per signature, 20 buckets, and 1 row per bucket. These parameters are the default parameters in the Spark implementation of MinHashLSH, and we did not do a hyperparameter sweep on these parameters due to compute limitations. Previous work has attempted running MinHash with much more aggressive parameters: Lee et al. [27] and Penedo et al. [39] use 20 buckets, 450 hashes per bucket, and 9000 signatures per hash. We conjecture that more aggressive MinHash would remove more templates, resulting in a higher-quality starting dataset, potentially making the SemDeDup step of D4 less necessary. Abbas et al. [1] did find that the performance of MinHash from Lee et al. [27] and SemDeDup are comparable at a fixed data selection ratio of 3.9% on C4, indicating that SemDeDup filters out data similar to what aggressive MinHash does. We leave sweeping over these hyperparameters as future work.
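To make the de-duplication step concrete, below is a minimal, illustrative sketch of document-level MinHash LSH de-duplication. It is not the Spark pipeline used above; it assumes the `datasketch` library, 5-character shingles, and the 20-band / 1-row configuration quoted in this section:

```python
from datasketch import MinHash, MinHashLSH

def minhash_signature(text, num_perm=20):
    m = MinHash(num_perm=num_perm)
    for shingle in {text[i:i + 5] for i in range(max(1, len(text) - 4))}:
        m.update(shingle.encode("utf-8"))
    return m

docs = {
    "doc0": "the cat sat on the mat and looked out of the window",
    "doc1": "the cat sat on the mat and looked out of the window!",
    "doc2": "completely different text about language model pre-training",
}

# 20 bands with 1 row each, mirroring the parameters quoted above
# (a very permissive banding scheme).
lsh = MinHashLSH(num_perm=20, params=(20, 1))
kept = []
for doc_id, text in docs.items():
    sig = minhash_signature(text)
    if lsh.query(sig):      # an earlier near-duplicate already exists
        continue            # drop this document
    lsh.insert(doc_id, sig)
    kept.append(doc_id)
print(kept)                 # 'doc1' is dropped as a near-duplicate of 'doc0'
```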
We note that since our dataset is curated from CommonCrawl dumps, there is a risk that our training set contains offensive or PII content. We note, however, that this risk is no greater than that of standard language modeling curation such as Touvron et al. [50], since we use the same pipeline to filter CommonCrawl dumps.
# A.1.3 Parameters for Data Selection
All methods introduced in Section 3.4 involve clustering embeddings using K-Means. Our starting training dataset CC-dedup contains roughly 600 million documents in total. Running K-Means clustering on all 600 million 768-sized vectors would take a considerable amount of compute. Instead, we follow previous work [1, 47] and randomly sample roughly 100M documents with which to calculate centroids. We normalize the embeddings for these 100M documents to have L2-norm of 1.0, and then use faiss [24] with the following parameters:
```python
import faiss

kmeans = faiss.Kmeans(
    768,        # 125M OPT model embedding size
    11000,      # 11K clusters
    niter=20,   # 20 iterations
    verbose=True,
    seed=0,
    gpu=False,
    spherical=True,
    min_points_per_centroid=1,
    max_points_per_centroid=100000000,
)
```
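For completeness, here is a minimal sketch of how the fitted centroids can then be used to assign every document to a cluster. The `embeddings` array is a random stand-in for the real document embeddings, which come from a pre-trained 125M OPT model and are L2-normalized as described above:

```python
import numpy as np

# Stand-in for the sampled document embeddings (N x 768, L2-normalized).
embeddings = np.random.rand(100_000, 768).astype("float32")
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

kmeans.train(embeddings)                                      # fit the spherical centroids
distances, assignments = kmeans.index.search(embeddings, 1)   # nearest centroid per document
assignments = assignments.ravel()                             # cluster id for each document
```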
We choose 11000 clusters following previous work [1] and we note that this choice sticks to the heuristic that the number of clusters should roughly be the square root of the number of total points being clustered. We also note that in initial experiments for data selection at the 125M OPT model scale, we did not find a significant effect of the number of clusters on the performance of our data selection methods (see Figure A1); this finding agrees with Abbas et al. [1], who notice significant overlap between datasets selected by SemDeDup with different numbers of clusters (see Figure A2 in Abbas et al. [1]).
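As a quick sanity check of that heuristic, using the roughly 100M sampled documents mentioned above:

```python
import math

n_sampled_docs = 100_000_000           # documents used to fit the centroids
print(int(math.sqrt(n_sampled_docs)))  # 10000, on the order of the 11K clusters used here
```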
Figure A1: Effect of number of clusters in K-Means on data selection performance. All models are 125M OPT models, where the training set (and starting source dataset) is C4 and we select data with SSL prototypes. The y-axis is the change in perplexity compared to baseline training, meaning that baseline training is at 0.0, and going down on the graphs indicates better performance. The x-axis is the source dataset size. We show results for average perplexity on Non-web snapshot validation sets (left) and Instruct + Answers (right). We notice that there is not a significant difference when changing number of clusters (e.g. if we drew error bars around each line, they would all be overlapping), but 11K clusters is generally among the top-3 best performing methods.
We deliberately set min_points_per_centroid low and max_points_per_centroid high so that faiss does not attempt to manually balance the clusters while doing K-Means. Sorscher et al. [47] found that explicit class-balancing is important: they introduce the "class balance score" (see Section H of Sorscher et al. [47]), which is the expectation of the ratio (size of minority class) / (size of majority class) over all pairs of classes. They then set a hard limit for the class balance score of 0.5, meaning that "every class has at least 50% of the images that it would have when pruning all classes equally" [47]. We consider the unsupervised-learning analog of the class-balance score, which we refer to as the "cluster balance" score. The cluster balance score is the expectation of the ratio (size of smaller cluster) / (size of bigger cluster) over all pairs of clusters. Across all of our data selection methods (and choices for R) we find that this value is generally equal to or bigger than 0.5 without any explicit intervention. For this reason, we do not explicitly cluster balance, although we note that changing how many points are sampled from each cluster (based on properties of the cluster) is very interesting future work.
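A minimal sketch of the cluster balance score as defined above (assuming `cluster_sizes` comes from the K-Means assignments; the helper name is ours):

```python
import itertools
import numpy as np

def cluster_balance_score(cluster_sizes):
    # Expected ratio of smaller-to-bigger cluster size over all pairs of clusters.
    sizes = np.asarray(cluster_sizes, dtype=float)
    ratios = [min(a, b) / max(a, b) for a, b in itertools.combinations(sizes, 2)]
    return float(np.mean(ratios))

# Example: three clusters with 100, 80, and 60 documents.
print(cluster_balance_score([100, 80, 60]))  # ~0.72, comfortably above 0.5
```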
D4 parameters: The choice of parameters Rproto and Rdedup while using D4 will have an impact on the performance of D4. Given limited compute, we are not able to sweep over these hyperparameters. Instead, we strategically choose these parameters: we first look at the highest value of R in SemDeDup that results in perplexity improvement across validation sets. We choose the "highest value" because the purpose of SemDeDup is to remove duplicate-driven clusters, and low R with SemDeDup generally removes more than just templates/semantic duplicates. As seen in Section A.3, this generally occurred with Rdedup = 0.75. Thus, we chose Rdedup = 0.75 and varied Rproto to obtain different data selection ratios for D4.
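For intuition, and under the assumption that the two stages compose multiplicatively (so the overall selection ratio is R = Rdedup * Rproto), the Rproto needed for a few target overall ratios with Rdedup = 0.75 would be:

```python
r_dedup = 0.75
for target_r in (0.75, 0.50, 0.25):
    print(target_r, "->", round(target_r / r_dedup, 3))  # 1.0, 0.667, 0.333
```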
# A.1.4 Which validation sets go into the averages?
For clarity, we explicitly state the validation sets which we consider "Web Snapshots", "Non Web Snapshots", and "Instruct + Answers" when reporting averages:
Web Snapshots: perplexity on the validation sets of C4, CC-dedup, and CommonCrawl (from the Pile)
Non-web Snapshots: perplexity on the other validation sets from the Pile, comprising OpenWebText2, HackerNews, Wikipedia (en), BookCorpusFair, DM Mathematics, Gutenberg PG-19, OpenSubtitles, and USPTO. Also included in this average are "redditflattened" (a validation set from Pushshift.io Reddit [4]), "stories", "prompts_with_answers" (described below), and "prompts" (the same as "prompts_with_answers" but where each sample is just the instruction-tuning prompt without the answer).
Instruct + Answers: perplexity on instruction-tuning data from OPT-IML [21], where each sample contains both the instruction-tuning prompt and the answer (in Figure A4 this is referred to as "prompts_with_answers").
While the validation sets in web-snapshots and non-web snapshots are clear (they are either standard open-sourced datasets, or derived from commonly used data), we expect that the "Instruct + Answers" data might be new to some readers. We provide a few examples of what this validation set looks like in Table A2.
2308.12284 | 66 | Instructions: In this task, you are given two phrases: Head and Tail, separated with <sep>. The Head and the Tail events are short phrases possibly involving participants. The names of specific people have been replaced by generic words (e.g., PersonX, PersonY, PersonZ). PersonX is always the subject of the event. You have to determine whether the Head is located or can be found at/in/on the Tail or not. Classify your answers into "Yes" and "No". The phrase may also contain "___", a placeholder that can be an object, a person, and/or an action.Input: Head: PersonX acknowledges gratefully the ___<sep>Tail: to use it Output: No Read the given sentence and if it is a general advice then indicate via "yes". Otherwise indicate via "no". advice is basically offering suggestions about the best course of action to someone. advice can come in a variety of forms, for example Direct advice and Indirect advice. (1) Direct advice: Using words (e.g., suggest, advice, recommend), verbs (e.g., can, could, should, may), or using questions (e.g., why donât | 2308.12284#66 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
2308.12284 | 67 | suggest, advice, recommend), verbs (e.g., can, could, should, may), or using questions (e.g., why donât youâs, how about, have you thought about). (2) Indirect advice: contains hints from personal experiences with the intention for someone to do the same thing or statements that imply an action should (or should not) be taken. Input: Let it go. Output: yes" Instructions: You are given a sentence in English. Your job is to translate the English sentence into Italian. No! Demand to understand. Ask. Answer: No! Esigete di comprendere. Chiedete. Task: In this task you will be given a list of integers. You should round each integer to the nearest tens place. That means you should round the number to the nearest multiple of 10.Input: [528, -636, -686, 368, -433, 992, 886] Answer: [530, -640, -690, 370, -430, 990, 890] | 2308.12284#67 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
2308.12284 | 68 | 16
# A.2 Efficiency gains across model scales and training
In this section, we investigate the relationship between model scale and the performance gain obtained by selecting data via D4. Specifically, we train three groups of models: 125M OPT models trained on Ttarget = 3B tokens, 1.3B OPT models trained on Ttarget = 40B tokens, and 6.7B OPT models trained on Ttarget = 100B tokens. We notice in Figure A2 that D4 results in efficiency gains across the board in terms of perplexity. Surprisingly, these efficiency gains seem to increase with scale, indicating that at bigger model scales, D4 might lead to even more efficiency gains. We also see efficiency gains in 0-shot downstream accuracy for 1.3B and 6.7B model scales on the order of 30% for both 1.3B and 6.7B models, but we note that evaluating downstream performance on intermediate checkpoints is not completely fair due to the unfinished learning rate schedule. Nonetheless, we see that downstream accuracy efficiency gains are not decreasing with scale.
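A minimal sketch of how such an efficiency gain can be read off the training curves. The exact metric is an assumption here: the fraction of updates the D4 run saves before first matching the baseline's final perplexity, assuming the D4 curve does reach that value:

```python
import numpy as np

def efficiency_gain(baseline_ppl, d4_ppl, updates):
    target = baseline_ppl[-1]                      # baseline's final perplexity
    idx = np.argmax(np.asarray(d4_ppl) <= target)  # first step where D4 matches it
    return 1.0 - updates[idx] / updates[-1]

updates = np.array([10_000, 20_000, 30_000, 40_000])
print(efficiency_gain([30, 25, 22, 20], [28, 23, 20, 19], updates))  # 0.25
```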
[Figure A2: perplexity (Non-web Snapshots, Instruct + Answers) and 0-shot downstream accuracy versus number of updates, for baseline and D4 runs across model scales.]
Figure A2: Training trajectory of OPT models trained on raw data (gray line) and data selected via D4 (pink line). Across model scales (1st row: 8M OPT models trained on 2B tokens, 2nd row: 125M OPT models trained on 3B tokens, 3rd row: 1.3B OPT models trained on 40B tokens, 4th row: 6.7B OPT models trained on 100B tokens), we see significant efficiency gains in both perplexity (left two columns) and 0-shot downstream accuracy on 16 NLP tasks (right column). Importantly, we see that increasing model scale does not decrease efficiency gains. All plots show mean and standard error across three seeds, except for the last row. We do not evaluate downstream accuracy for models smaller than 1.3B because they are likely too close to random performance to indicate whether a particular data selection method is better.
# Individual Breakdowns of Downstream Accuracy and PPL
In Section 4, we see that D4, SSL prototypes, and SemDeDup achieve significant gains on perplexity (averaged across different validation sets) and downstream accuracy (averaged across different NLP tasks) compared to baseline training. Further, we generally see that D4 outperforms SSL prototypes and SemDeDup. In this section, we provide a more fine-grained analysis of these claims across individual tasks.
Figure A3: Per-task breakdown of 0-shot downstream accuracy comparison across data selection methods, for the 1.3B OPT model trained on 40B tokens. For a description of the 16 NLP tasks shown above, see Section 3.4. We note that there is considerable variability across individual downstream tasks.
Performance on web-snapshot validation sets (C4 and CC-dedup) worsens with data selection compared to baseline training, and D4 generally has the slowest rate of performance degradation. We note that, across all non web-snapshot validation sets, there is no clear winner among data selection methods. We emphasize, however, that we observe consistent improvement over baseline training on most validation sets we use: for example, in Figure A4 we observe that, when selecting tokens from a 1.25x source dataset, all data selection methods improve over baseline across all validation sets except C4 and CC-dedup (however, as we explain in Section 4.4, this decrease in performance on C4 and CC-dedup is expected).
For downstream accuracy, we chose to match the exact downstream evaluation done in Zhang et al. [59] since we use the OPT architecture and hyperparameters. Similar to Zhang et al. [59], we notice considerable variability across the 16 NLP tasks in Figure A3, motivating us to look at the mean downstream accuracy across tasks.
[Figure A4: perplexity on individual validation sets as a function of selection ratio R (1.00 down to 0.00) for each data selection method.]
Figure A5 shows the overlap between datasets selected by SemDeDup and SSL Prototypes. While the two methods do not arrive at the same set of data points, there is a significant overlap between the datasets curated by the two methods. We hypothesize that this is because both SSL prototypes and SemDeDup prune away dense regions of space surrounding cluster centroids: by definition, SemDeDup sparsifies dense regions of space within a cluster; similarly, by definition, SSL prototypes will prune away datapoints close to the cluster centroids. Since K-means clustering places centroids in dense regions of space (see Figure A6, where we observe that the distribution of cosine distances to cluster centroid is skewed right), we know that the regions of space surrounding centroids will be dense, and expect SSL prototypes and SemDeDup to have significant overlap. Qualitatively, we inspect a few examples of points close to cluster centroids in Figure A3, Figure A4, Figure A5, and see that examples close to cluster centroids can be semantically redundant (e.g. templates). Therefore, it makes sense that any reasonable data selection strategy would prioritize sparsifying these dense regions of space surrounding cluster centroids. As mentioned in Section 3.4, sparsifying these dense regions of space containing excessive semantic duplicates is the original motivation behind D4.
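To make the pruning geometry concrete, here is a rough sketch (not the authors' exact implementation) of an SSL-prototypes-style selection: score each point by cosine similarity to its assigned centroid and keep the least prototypical fraction. It assumes unit-normalized embeddings and centroids, and the helper name is ours:

```python
import numpy as np

def ssl_prototype_keep_mask(embeddings, centroids, assignments, keep_fraction):
    # Cosine similarity to the assigned centroid (inputs assumed L2-normalized).
    sims = np.einsum("ij,ij->i", embeddings, centroids[assignments])
    order = np.argsort(sims)                   # ascending: least prototypical first
    n_keep = int(len(embeddings) * keep_fraction)
    mask = np.zeros(len(embeddings), dtype=bool)
    mask[order[:n_keep]] = True                # keep points far from their centroid
    return mask
```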
Figure A5: Similarity between data selection methods. Each square represents the percentage of training data that intersects when selecting data via two different strategies. The x and y axes enumerate different data selection strategies.
As shown in Figure 7, omitting the re-clustering step significantly worsens performance, and we observe in the rightmost plot of Figure 7 that SemDeDup indeed removes duplicate-driven clusters.
[Figure A6: histogram of cosine distances to cluster centroids; x-axis: distance to cluster centroid, y-axis: count.]
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
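The overlap statistic plotted in Figure A5 in the chunk above is straightforward to reproduce for any pair of selection strategies: it is the share of one strategy's selected documents that the other strategy also selects. A minimal sketch; the document-id sets and the choice of denominator are illustrative assumptions rather than the paper's exact bookkeeping.

```python
def overlap_percentage(selected_a: set, selected_b: set) -> float:
    """Percentage of strategy A's selected documents that strategy B also selects."""
    if not selected_a:
        return 0.0
    return 100.0 * len(selected_a & selected_b) / len(selected_a)

# Toy usage with hypothetical document ids:
d4_docs = {"doc1", "doc2", "doc3", "doc4"}
semdedup_docs = {"doc2", "doc3", "doc4", "doc9"}
print(overlap_percentage(d4_docs, semdedup_docs))  # 75.0
```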
2308.12284 | 81 | # Investigating Train-Validation overlap
As briefly described in Section 4.4, we observe that many of our validation sets are close (in cosine distance) to our training sets, and that the impact of data selection varies across individual validation sets. Individual validation sets live in different regions of the embedding space, and as such they are affected differently by data selection. For example, one could imagine that web-snapshot validation sets such as C4 are close to CC-dedup in the embedding space, while esoteric validation sets (such as Gutenberg PG-19 or DM Mathematics) might be far. To quantify this, we first find the nearest neighbors in the training set to each validation point in all of our validation sets. We then qualitatively check (see Table A8 and Table A9 for examples) that nearest neighbors in the training set truly convey information about validation points; we observe significant overlap between training points and validation points. We then quantitatively analyze how close each validation set is to the training set: in Figure A12, we show the breakdown of this distribution for each validation set. We see a general trend: web-snapshot validation sets are closest to the training set, as their distributions are skewed to the right, while more esoteric validation sets (Gutenberg, or Wikipedia (en)) are more centered or even slightly left-skewed. | 2308.12284#81 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
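The distance-to-training-set measurement described in the chunk above can be sketched as a brute-force nearest-neighbor search over L2-normalized embeddings; the variable names and the per-set loop are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_cosine_distances(train_emb: np.ndarray, valid_emb: np.ndarray) -> np.ndarray:
    """Cosine distance from each validation embedding to its nearest training embedding."""
    t = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    v = valid_emb / np.linalg.norm(valid_emb, axis=1, keepdims=True)
    nn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(t)
    dists, _ = nn.kneighbors(v)
    return dists[:, 0]

# Per-validation-set summary, as in Figure 5 / Figure A12 (hypothetical inputs):
# for name, emb in validation_sets.items():
#     print(name, nn_cosine_distances(train_embeddings, emb).mean())
```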
2308.12284 | 82 | Motivated by this, we compare validation sets side-by-side (in terms of distance to training set) in Figure 5, and we see a similar trend. To further understand why different validation sets are affected differently by data selection, we loop through each data point in the validation set and record:
⢠distance to the training set e.g. how close is the validation point to the training set
⢠perplexity difference before and after data selection with D4 e.g. how much was this validation point affected by data selection
⢠original perplexity e.g. how easy was this data point originally
In Figure A11, we observe an interesting trend: for web-snapshot validation sets such as C4, the validation points closest to the training set are both (1) the easiest (lowest perplexity) points before data selection and (2) the points most affected by data selection. This seems to indicate that these validation points are "easy" due to their proximity to training points, and when these training points are removed from the training set due to data selection, the close-by validation points become difficult for the model. We do not see this trend on non-web snapshot validation sets such as DM Mathematics and Open Subtitles; in fact, we see an opposite trend where points furthest from the training set are generally most affected by data selection. | 2308.12284#82 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 83 | As a sanity check, we change the sizes of validation sets used to plot Figure 5 in Section 4.4. We see in Figure A8 that controlling for validation set size, we get the same jump going from web-derived to web-independent validation sets. In running this experiment, we are forced to randomly sample if the particular validation set is too big; to ensure that such random sampling does not change the distance to nearest neighbor in the training dataset too much, we vary the amount we sample for three differently sized datasets in Figure A7. We observe that changing the amount we randomly sample from a validation set does not significantly change the mean distance to nearest neighbor in train.
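The two robustness sweeps described here and in the next paragraph can be sketched as follows: subsample the validation set at several fractions and check that the mean nearest-neighbor distance to the training set stays flat (Figure A7), and subsample the training set to see that distances grow while relative orderings are preserved (Figures A9 and A10). Names and fractions below are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mean_nn_distance(train_emb: np.ndarray, valid_emb: np.ndarray) -> float:
    t = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    v = valid_emb / np.linalg.norm(valid_emb, axis=1, keepdims=True)
    d, _ = NearestNeighbors(n_neighbors=1, metric="cosine").fit(t).kneighbors(v)
    return float(d.mean())

def sweep_validation_fraction(train_emb, valid_emb, fractions, seed=0):
    rng = np.random.default_rng(seed)
    out = {}
    for f in fractions:                      # e.g. np.linspace(0.05, 1.0, 20)
        idx = rng.choice(len(valid_emb), size=max(1, int(f * len(valid_emb))), replace=False)
        out[f] = mean_nn_distance(train_emb, valid_emb[idx])
    return out

def sweep_train_fraction(train_emb, valid_emb, fractions, seed=0):
    rng = np.random.default_rng(seed)
    out = {}
    for f in fractions:                      # e.g. 1e-5 to 1.0 on a log scale
        idx = rng.choice(len(train_emb), size=max(1, int(f * len(train_emb))), replace=False)
        out[f] = mean_nn_distance(train_emb[idx], valid_emb)
    return out
```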
We also investigate whether the differences between validation sets in Figure 5 are due to training set size. We would expect that smaller training sets are "further" from validation sets, since removing training points can only increase the distance from a validation point to its nearest remaining neighbor. Indeed, we see this in Figure A9. However, we observe that the relative ordering of validation sets (with respect to average distance to the training set) remains the same for any fixed training dataset size. Moreover, we see in Figure A10 that the relative ranking of all validation sets, as well as the jump from web-derived to web-independent validation sets from the original Figure 5, holds even as we reduce training dataset size. | 2308.12284#83 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 84 | [Figure A7 plot: "Changing validation set sizes"; legend: c4, DM_Mathematics, OpenSubtitles; x-axis: Fraction of Validation Set; y-axis: Mean distance to train nearest neighbor.]
Figure A7: Studying the effect of validation set size on cosine distance to nearest-neighbor in training set. On the x-axis, we vary the size of the validation set (by randomly sampling the original larger validation set), and the y-axis represents distance to nearest neighbor in the training set (averaged across the validation set). We observe that regardless of what fraction of the original validation set is sampled, the mean distance to the nearest neighbor in train does not change, indicating that Figure 5 is not due to different validation set sizes.
Figure 5, with each validation set the same size (50 points) | 2308.12284#84 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 85 | [Figure A8 plot: "Figure 5, with each validation set the same size (50 points)"; per-validation-set distances to nearest neighbor in train.]
Figure A8: Investigating whether Figure 5 changes if we control for validation set size. In the Figure above, each validation set contains 50 data points, which is the size of the smallest validation set we use (BookCorpusFair). If a validation set is bigger than 50 data points, we randomly sample the validation set to obtain 50 data points.
[Figure A9 plot: Distance to Nearest Neighbor in Train vs. Training Set Size; legend: c4, DM_Mathematics, OpenSubtitles; x-axis: Fraction of Training Set (log scale); y-axis: Mean distance to train nearest neighbor.] | 2308.12284#85 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 86 | [Figure A9 plot: Distance to Nearest Neighbor in Train vs. Training Set Size; x-axis: Fraction of Training Set (log scale); y-axis: Mean distance to train nearest neighbor.]
Figure A9: Studying the effect of training set size on cosine distance to nearest-neighbor in training set. On the x-axis, we vary the size of the training set (by randomly sampling the original training set), and the y-axis represents distance to nearest neighbor in the training set (averaged across the validation set). We observe that cosine distance to the training set increases with smaller training sets, but the relative ordering of validation sets (with respect to mean distance to training set) remains the same.
[Figure A10 plots: one panel per training-data fraction (1e-05, 0.0001, 0.001, 0.01, ...); y-axis: Cosine Distance to NN in Train.] | 2308.12284#86 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 88 | [Figure A11 plots: columns for DM_Mathematics, OpenSubtitles, and c4; x-axis: Cosine Distance to NN in train (binned); rows show counts, mean original perplexity, and mean perplexity difference per bin.] | 2308.12284#88 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 90 | Figure A11: (Top) Histogram of cosine distance to nearest neighbor in train. Within each bin, we show the mean original perplexity (middle) and mean difference in perplexity after data selection (bottom), for DM_Mathematics (left), OpenSubtitles (middle), and C4 (right). We note that points in the C4 validation set closest to the training set are both "easy" (perhaps because of proximity to training points) and are affected the most by data selection. We do not see this trend for non-web snapshot validation sets such as DM_Mathematics and OpenSubtitles.
| 2308.12284#90 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 91 | [Figure A12 histograms: panels for BookCorpusFair, DM_Mathematics, HackerNews, OpenWebText2, Wikipedia_en, dialogue_knowledge, prompts_with_answers, stories, CommonCrawl, Gutenberg PG-19, OpenSubtitles, and USPTO; x-axis: cosine distance to nearest neighbor in train.] | 2308.12284#91 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 92 | [Figure A12 histograms (continued): panels for CommonCrawl, Gutenberg PG-19, OpenSubtitles, USPTO, prompts, and redditflattened; x-axis: Cosine Distance to Nearest Neighbor in Train.] | 2308.12284#92 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 93 | Figure A12: Distribution of cosine distance to nearest neighbor in the training set, for each individual validation set.
# A.6 Further investigation of repeating tokens
In this section, we investigate whether the findings from Section 4.2 hold across model scale, data selection ratio (e.g. number of epochs), and data selection method.
Across data selection methods: We first take the same configuration as Section 4.2, where we have a starting source dataset of 40B tokens, use each of our data selection methods with R = 0.25 to select a subset of documents, and repeat over these documents until we reach the target token budget of 40B tokens. Note that this is at the 1.3B model scale. In Figure A13 we see that repeating data selected by both SemDeDup and SSL prototypes also outperforms randomly selecting new data. However, we quickly notice that for a fixed data selection strategy (e.g. a fixed column in Figure A13), selecting new tokens either matches or outperforms repeating tokens. In other words: cleverly repeating tokens can outperform randomly selecting new tokens, but if we fix the data selection strategy (random, SemDeDup, SSL prototypes, or D4) then it is usually preferable to select new tokens. We also note in Figure A16 that D4 outperforms other methods, although by a smaller margin than in the fixed-compute regime. | 2308.12284#93 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
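The fixed-token-budget setup in the chunk above (select a subset of documents with ratio R, then repeat over it until the budget is reached) can be sketched as a simple cycling loop; the tokenizer interface and names are assumptions, not the training code used in the paper.

```python
from itertools import cycle

def repeated_token_stream(selected_docs, token_budget, tokenize):
    """Yield tokenized documents, cycling over the selected subset (i.e. epoching)
    until roughly `token_budget` tokens have been emitted."""
    emitted = 0
    for doc in cycle(selected_docs):
        ids = tokenize(doc)          # assumed: returns a list of token ids
        yield ids
        emitted += len(ids)
        if emitted >= token_budget:
            break
```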
2308.12284 | 94 | Across model scale and data selection ratio: We fix our data selection strategy as D4 as done in Section 4.2, but attempt repeating tokens across 3 model scales (125M, 1.3B, and 6.7B), and across
[Figure A13 plots: one column per data selection method (Random, SSL Prototypes, SemDeDup, D4); solid lines denote new tokens and dashed lines denote repeated tokens, with baseline training in gray; y-axes: perplexity on non-web snapshots (top row) and Instruct + Answers (bottom row); x-axis: Num Updates.] | 2308.12284#94 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 95 | Figure A13: Effect of repeating tokens across data selection methods over training. X-axis denotes the number of updates, and the y-axis denotes average perplexity across non-web-snapshot validation sets (top row) and Instruct OPT (bottom row). Each column in the plot above denotes a different data selection method. Within each column: (1) the gray line denotes baseline training, (2) the colored-dashed line denotes repeating tokens via the specified data selection method, and (3) the colored-solid line denotes selecting new tokens via the specified data selection method. Repeating data is generally worse than selecting new data for a fixed data selection method (e.g., fixed column).
data selection ratios (R = 0.5 and R = 0.25). We see in Figure A15 that repeating data with D4 outperforms randomly selecting new tokens across all model scales and choice of R. | 2308.12284#95 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 96 | We note that for fixed R, different data selection methods will choose subsets of the source dataset that contain different amounts of tokens. This means that different data selection methods will epoch a different number of times. For example, for a 1.3B OPT model 40B token budget training run, if randomly repeating data with R = 0.25 chooses a subset with 10B tokens and D4 with R = 0.25 chooses a subset with 15B tokens, then the random run will epoch 4 times while the D4 run will epoch 2.67 times. To show this more clearly, we plot 1.3B and 6.7B repeated data runs with the x-axis changed to number of epochs in Figure A14. We see that up to roughly 2 epochs of data chosen with D4 significantly outperforms randomly selected new data; however, close to 5 epochs leads to worse performance.
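The epoch arithmetic used in the example above is just the ratio of the token budget to the size of the selected subset; a quick check of the numbers quoted in the text:

```python
def n_epochs(token_budget: float, selected_tokens: float) -> float:
    return token_budget / selected_tokens

print(n_epochs(40e9, 10e9))  # 4.0 epochs for a 10B-token subset
print(n_epochs(40e9, 15e9))  # ~2.67 epochs for a 15B-token subset
```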
[Figure A14 plots: Random (repeated tokens) vs. D4 (repeated tokens); panels: 1.3B Non Web Snapshots, 1.3B Instruct + Answers (ppl), 6.7B Non Web Snapshots, 6.7B Instruct + Answers (ppl); x-axis: Number of Epochs; y-axis: PPL.] | 2308.12284#96 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 98 | [Figure A15 plots: Random (repeated tokens) vs. D4 (repeated tokens); columns: Web Snapshots, Non-web Snapshots, Instruct + Answers (ppl); rows: 125M, 1.3B, 6.7B model scales; x-axis: Selection Ratio (R).] | 2308.12284#98 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 99 |
Figure A15: Comparison of repeating tokens with D4 (pink line), randomly selecting new tokens (horizontal dashed gray line), and randomly repeating data (gray line). We see across model scales (top: 125M trained on 3B tokens; middle: 1.3B trained on 40B tokens; bottom: 6.7B trained on 100B tokens) and data selection ratios, repeating data selected by D4 outperforms randomly selecting new data.
[Figure A16 plots: Random, SemDeDup, SSL Prototypes, and D4 (repeated tokens); panels: Non-web snapshots and Instruct + Answers (ppl); x-axis: Selection Ratio (R); y-axis: average perplexity.] | 2308.12284#99 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 100 | Figure A16: Comparison of data selection methods when repeating data at the 125M, 3B token budget scale. The x-axis is the data selection ratio R, and the y-axis is average perplexity on validation sets. We observe that selecting data to repeat via D4 outperforms other data selection methods, especially at low selection ratios R (note that low selection ratios in the fixed-data regime correspond to more epochs).
# A.7 Choice of Embedding Space
All data selection methods we employ rely heavily on the quality of the underlying embedding space. We qualitatively analyzed the embedding produced by the last-token last-layer OPT 125M model and observed a bias towards end-of-document format. For example, if documents all end with an email or a standard phrase ("Buy our product today!"), then these documents would be clustered together. This likely helps detect templates (since templates tend to end their text in very similar ways), but has
clear pitfalls: for example, if we took thousands of Wikipedia articles about unrelated topics and appended the same email at the end of each article, they might be clustered together.
Motivated by this, we briefly experiment with different embedding spaces and discuss our results in this section.
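For reference, a minimal sketch of one way to compute a last-token, last-layer OPT-125M document embedding with HuggingFace transformers; the truncation length, normalization, and lack of batching are assumptions for illustration rather than the authors' exact pipeline.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModel.from_pretrained("facebook/opt-125m").eval()

def opt_last_token_embedding(text: str) -> torch.Tensor:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state   # (1, seq_len, hidden_dim)
    emb = hidden[0, -1]                           # last token, last layer
    return emb / emb.norm()                       # unit norm for cosine distances

# Because the representation comes from the final token, documents that end with the
# same boilerplate (e.g. a shared sign-off) can land close together, which is the
# end-of-document bias described above.
```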
# A.7.1 SentenceTransformer models | 2308.12284#100 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.12284 | 101 | Motivated by this, we briefly experiment with different embedding spaces and discuss our results in this section.
# A.7.1 SentenceTransformer models
BERT embeddings have generally been used to accomplish various NLP tasks, because BERT (unlike GPT/OPT) is able to attend to every token in the input when producing an embedding (BERT is an encoder-only model, while OPT/GPT are decoder-only). While there are numerous BERT-style models available, we hoped for an embedding space focused on semantic similarity. Thus, we opted to use the widely popular SentenceTransformer models, which are BERT-style models finetuned specifically on >1B text similarity pairs. We chose the top model on the SentenceTransformer leaderboard (all-mpnet-base-v2) and the smallest well-performing model (all-Mini-LM-v6). Note that these models have max context lengths of 384 and 256 tokens (respectively), and we stuck with the SentenceTransformer default of truncating inputs to fit the max sequence length (i.e. these embeddings only consider the beginning of documents). | 2308.12284#101 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
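A minimal sketch of the SentenceTransformer embeddings discussed in the chunk above, together with the chunk-and-average workaround ("M1") that the following chunks describe for long documents. The hub model name "all-MiniLM-L6-v2" is assumed to correspond to the paper's "all-Mini-LM-v6", and chunking by whitespace tokens is a simplification of chunking by the model's max context length.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # truncates long inputs by default

def embed_truncated(texts):
    """Default behaviour: only the beginning of each document is embedded."""
    return model.encode(texts, normalize_embeddings=True)

def embed_chunk_averaged(text: str, words_per_chunk: int = 200) -> np.ndarray:
    """M1-style embedding: split a long document into chunks, embed each, average."""
    words = text.split()
    chunks = [" ".join(words[i:i + words_per_chunk])
              for i in range(0, len(words), words_per_chunk)] or [text]
    emb = model.encode(chunks, normalize_embeddings=True)
    mean = emb.mean(axis=0)
    return mean / np.linalg.norm(mean)
```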
2308.12284 | 102 | We observe in Figure A17 that, at small model scales, SentenceTransformer embedding spaces outperform the OPT embedding space. Given these initial results, we took our most overall-efficient embedding space at the 1.3B model scale ("all-mini-lm-v6") and ran a 6.7B training run with it. Surprisingly, we observed that at the larger model scale, the OPT embedding space outperforms the "all-mini-LM-v6" embedding space. Given that the difference between "all-mini-LM-v6" and "all-mpnet-base-v2" is generally small (see Figure A17), we also expect the OPT embedding space to beat "all-mpnet-base-v2" at the 6.7B scale, although we were not able to complete this run due to compute restrictions. We see the same trend when we consider overall and naive efficiency of using D4 with different embedding spaces in Figure A18. | 2308.12284#102 | D4: Improving LLM Pretraining via Document De-Duplication and Diversification | Over recent years, an increasing amount of compute and data has been poured
In an effort to understand why SentenceTransformer embedding spaces perform worse at larger model scales, we qualitatively analyze the clusterings produced with each SentenceTransformer embedding space. We find that using D4 with "all-mp-net-base-v2" and "all-mini-lm-v6" disproportionately prunes long documents. We hypothesize that this is because sentence transformer models are trained and finetuned on actual sentence pairs, which very rarely saturate the max context length of the model. This might result in all "long" documents (or at least any input that is max-context-length size) seeming out-of-distribution to the model. We suspect that this causes long documents to be clustered together, and therefore to be disproportionately affected during pruning. This might be especially relevant in domains like Wikipedia articles, where headers and introductions look semantically similar, but the actual content (past the first max-context-length tokens) is very different.
In an effort to circumvent this problem, we tried two approaches at a small model scale:
• M1: Chunking long documents into max-context-length chunks and averaging all-mini-lm-v6 embeddings across chunks to produce a final document embedding (a short code sketch of this chunk-and-average scheme follows the list).
• M2: Using Contriever [22] embeddings, where we chose the Contriever model because it is trained to determine if two sentences are from the same document, and therefore should be agnostic to position within a document.
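The M1 embedding can be illustrated with a short, self-contained sketch. This is not the exact pipeline used above: it assumes the sentence-transformers package, takes "all-MiniLM-L6-v2" as a stand-in for the "all-mini-lm-v6" model named here, and uses a fixed 256-token chunk size; all of these are illustrative assumptions.

```python
# Hedged sketch of the M1 scheme: split a long document into fixed-size token
# chunks, embed each chunk with a SentenceTransformer model, and average the
# chunk embeddings into a single document embedding.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in for "all-mini-lm-v6"
CHUNK_TOKENS = 256  # illustrative max-context-length chunk size


def m1_document_embedding(text: str) -> np.ndarray:
    # Tokenize the full document, split the token ids into chunks, decode each
    # chunk back to text, embed the chunks, and mean-pool the chunk embeddings.
    ids = model.tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [
        model.tokenizer.decode(ids[i : i + CHUNK_TOKENS])
        for i in range(0, max(len(ids), 1), CHUNK_TOKENS)
    ]
    chunk_embeddings = model.encode(chunks)  # shape: (num_chunks, dim)
    return np.mean(chunk_embeddings, axis=0)


if __name__ == "__main__":
    long_doc = "This is a placeholder sentence for a very long document. " * 400
    print(m1_document_embedding(long_doc).shape)  # e.g. (384,)
```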
Both in terms of perplexity improvement at the end of training (see Figure A19) and efficiency (see Figure A18), we do not observe a significant difference between the OPT embedding space and embedding spaces M1 and M2 at the small model scale (125 million parameters). We note that M1 and M2 are significantly worse than all-mp-net-base-v2 and all-mini-lm-v6 at small scales and suffer from the same problem of pruning away long documents (compared to the OPT embedding space; a simple check for this length bias is sketched below), so we expect these embedding spaces to under-perform the OPT embedding space at the 6.7b scale.
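The length-bias check can be made concrete with a few lines of code. The following is a hedged sketch under assumed inputs (per-document token lengths and a boolean mask of pruned documents), not the analysis script used for the results above:

```python
# Hedged sketch: compare the token-length distribution of documents removed by
# pruning against the documents that were kept.
import numpy as np


def length_bias_report(token_lengths, pruned_mask):
    token_lengths = np.asarray(token_lengths)
    pruned_mask = np.asarray(pruned_mask, dtype=bool)
    for name, mask in [("pruned", pruned_mask), ("kept", ~pruned_mask)]:
        lens = token_lengths[mask]
        print(
            f"{name:>6}: n={lens.size:6d}  "
            f"median={np.median(lens):8.1f}  p90={np.percentile(lens, 90):8.1f}"
        )


# Toy usage with synthetic lengths and a deliberately length-biased mask; in
# practice the lengths come from the tokenizer and the mask from the pruning
# strategy being analyzed.
rng = np.random.default_rng(0)
lengths = rng.lognormal(mean=6.0, sigma=1.0, size=10_000).astype(int)
pruned = lengths > np.percentile(lengths, 80)
length_bias_report(lengths, pruned)
```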
3: https://www.sbert.net/docs/pretrained_models.html
[Figure A17 panels: Non Web Snapshots and Instruct+Answers (ppl); y-axis: perplexity; x-axis: selection ratio R; series: all-mp-net-base-v2, all-mini-lm-v6, OPT.]
Figure A17: Perplexity (y-axis) versus selection ratio R (x-axis) for different embedding spaces, when selecting data via D4. Across the 8m (top), 125m (middle), and 1.3b (bottom) model scales, the SentenceTransformer embedding spaces outperform the OPT embedding space, but at the 6.7b model scale the OPT embedding space begins outperforming the all-mini-lm-v6 embedding space. We were unable to run an "all-mp-net-base-v2" 6.7b experiment due to compute restrictions, but we note that the difference between "all-mini-lm-v6" and "all-mp-net-base-v2" across model scales and selection ratios is generally small, so we expect the OPT embedding space to outperform "all-mp-net-base-v2" at the 6.7b scale.
[Figure A18 panels: Non Web Efficiency and Instruct + Answers Efficiency; y-axis: efficiency gain (% compute saved); x-axis: model size (log scale); series: OPT, all-mini-lm-v6, all-mp-net-base-v2, avg-chunk all-mini-lm-v6, Contriever.]
Figure A18: Comparison of naive efficiency for different embedding spaces when using D4 as the data selection strategy. Similar to Figure A17, all-mini-lm-v6 outperforms the OPT embedding space at small scale, but not at the large (6.7b) model scale.
[Figure A19 panels: Non Web Snapshots and Instruct+Answers (ppl); y-axis: perplexity; x-axis: selection ratio R; series: OPT, avg-chunk all-mini-lm-v6, Contriever.]
Figure A19: Comparison of embedding spaces M1 (averaging all-mini-lm-v6 embeddings across all chunks in a document, where a chunk is defined as 256 tokens) and M2 (embeddings from the Contriever model) with the OPT model embedding space, when using D4 as the selection strategy. We note that neither embedding space significantly outperforms the OPT model embedding space at the 125M scale.
# A.8 Replicating Fixed Compute Results on C4
In this section, we briefly show our results for comparing data selection methods at the 125M scale, where the pre-training dataset is the C4 [41] dataset instead of CC-dedup. We see in Figure A20 that D4 generally outperforms the other methods. These initial experiments motivated us to compare data selection methods on more heavily filtered web data (i.e., CC-dedup).
[Figure A20 panels: Non-web snapshots and Instruct OPT; y-axis: perplexity difference versus baseline; x-axis: selection ratio R; series: SSL Prototypes, SemDeDup, D4.]
Figure A20: Comparison of data selection strategies with the OPT model embedding space, when using C4 as the starting training dataset. The x-axis is the selection ratio R, and the y-axis is the perplexity difference compared to baseline (the horizontal gray dotted line at 0.0 represents our baseline, i.e., when no data selection is done), so lower is better. Notice that D4 and SemDeDup match at 90%, because we use R_dedup = 0.9 and vary R_proto for this experiment.
# Investigating Duplicate-Driven Clusters
In this subsection, we present a few examples of duplicate-driven clusters, which are clusters that are very dense and concentrated near their centroids. We find that these clusters tend to be filled with semantic duplicates and/or duplicated text. We can generally find such extreme duplicate-driven clusters by looking at clusters whose standard deviation of cosine distance to the cluster centroid is less than 0.03; this is essentially looking at clusters in the lower tail of the empirical CDF in Figure 7 (brown line), and a short sketch of this flagging rule is given below. We present a few examples of such clusters below:
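The 0.03 criterion can be turned into a simple flagging rule. The sketch below is an illustrative reconstruction (not the code used for this analysis): it assumes L2-normalizable document embeddings and integer cluster labels, flags clusters whose standard deviation of cosine distance to the centroid is below the threshold, and returns the indices of the documents nearest to each flagged centroid, in the spirit of the nearest-neighbor tables that follow.

```python
# Hedged sketch: flag duplicate-driven clusters by the standard deviation of
# cosine distance to the cluster centroid, and list the nearest documents to
# each flagged centroid. Embeddings and cluster labels are assumed inputs.
import numpy as np


def flag_duplicate_driven_clusters(embeddings, labels, std_threshold=0.03, top_k=4):
    embs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    flagged = {}
    for cluster_id in np.unique(labels):
        idx = np.where(labels == cluster_id)[0]
        centroid = embs[idx].mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        cosine_dists = 1.0 - embs[idx] @ centroid  # cosine distance to centroid
        if cosine_dists.std() < std_threshold:
            nearest = idx[np.argsort(cosine_dists)[:top_k]]
            flagged[int(cluster_id)] = {
                "std": float(cosine_dists.std()),
                "nearest_to_centroid": nearest.tolist(),
            }
    return flagged


# Toy usage; in practice the embeddings come from the pre-trained model and the
# labels from the k-means clustering used for data selection.
rng = np.random.default_rng(0)
toy_embeddings = rng.normal(size=(500, 32))
toy_labels = rng.integers(0, 10, size=500)
print(flag_duplicate_driven_clusters(toy_embeddings, toy_labels))
```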
Table A3: Nearest Neighbors to Cluster Centroid 682

| Cosine Distance to Centroid | Raw Text |
| --- | --- |
| 0.03581655 | The USGS (U.S. Geological Survey) publishes a set of the most commonly used topographic maps of the U.S. called US ......... may have differences in elevation and topography, the historic weather at the two separate locations may be different as well. |
| 0.03584063 | The USGS (U.S. Geological Survey) publishes a set of the most commonly used topographic maps of the U.S. called US ......... may have differences in elevation and topography, the historic weather at the two separate locations may be different as well. |
| 0.036803484 | The USGS (U.S. Geological Survey) publishes a set of the most commonly used topographic maps of the U.S. called US ......... may have differences in elevation and topography, the historic weather at the two separate locations may be different as well. |
| 0.037270606 | Search Near Clinton County, OH: Trails National and State Parks City Parks Lakes Lookouts Marinas Historical Sites The USGS (U.S. Geological ......... may have differences in elevation and topography, the historic weather at the two separate locations may be different as well. |
Table A4: Nearest Neighbors to Cluster Centroid 975

| Cosine Distance to Centroid | Raw Text |
| --- | --- |
| 0.011662006 | The American Way, Inc. The American Way, Inc. is a suspended Californian business entity incorporated 19th August 1949. is listed as ......... for bulk data downloadsI want to request the removal of a page on your websiteI want to contact California Explore |
| 0.012483656 | John St-Amour, Inc. John St-Amour, Inc. is a suspended Californian business entity incorporated 5th October 1962. is listed as the agent ......... for bulk data downloadsI want to request the removal of a page on your websiteI want to contact California Explore |
| 0.012564898 | Joseph E. Barbour, Inc. Joseph E. Barbour, Inc. is a suspended Californian business entity incorporated 27th January 1959. is listed as ......... for bulk data downloadsI want to request the removal of a page on your websiteI want to contact California Explore |
| 0.012756169 | The Jolly Boys, Inc. The Jolly Boys, Inc. is a suspended Californian business entity incorporated 4th March 1955. is listed as ......... for bulk data downloadsI want to request the removal of a page on your websiteI want to contact California Explore |
Table A5: Nearest Neighbors to Cluster Centroid 10715
| Cosine Distance to Centroid | Raw Text |
| --- | --- |
| 0.035506427 | Search hundreds of travel sites at once for hotel deals at Hotel Olympic Kornarou Square 44, Heraklion, Greece 34 m Bembo Fountain 262 ......... hundreds of travel sites to help you find and book the hotel deal at Hotel Olympic that suits you best. |
| 0.036230028 | Search hundreds of travel sites at once for hotel deals at Hotel Estrella del Norte Juan Hormaechea, s/n, 39195 Isla, Cantabria, ......... travel sites to help you find and book the hotel deal at Hotel Estrella del Norte that suits you best. |
| 0.036280274 | Search hundreds of travel sites at once for hotel deals at H10 Costa Adeje Palace Provided by H10 Costa Adeje Palace Provided ......... travel sites to help you find and book the hotel deal at H10 Costa Adeje Palace that suits you best. |
| 0.036827266 | Search hundreds of travel sites at once for hotel deals at Hotel Miguel Angel by BlueBay Calle Miguel Angel 29-31, 28010 ......... sites to help you find and book the hotel deal at Hotel Miguel Angel by BlueBay that suits you best. |
Table A6: Random Examples from Cluster 695
| Cosine Distance to Cluster Centroid | Raw Text |
| --- | --- |
| 0.044178426 | Eastern Florida State College nutritional sciences Learn about Eastern Florida State College nutritional sciences, and registering for electives. Which college degrees ......... System (IPEDS). If any stats on Hagerstown Community College career planning are incorrect, please contact us with the right data. |
| 0.056984067 | Albany State University introduction to business Find info concerning Albany State University introduction to business, and registering for elective discussion sections ......... If any stats on Warren County Community College plant science major are incorrect, please contact us with the right data. |
| 0.0534693 | Baldwin Wallace University cost per unit Learn about Baldwin Wallace University cost per unit, submitting required application forms, and follow-up scheduling. ......... (IPEDS). If any stats on San Jose State nursing degree programs are incorrect, please contact us with the right data. |
| 0.06892538 | Niagara University managerial accounting Information about Niagara University managerial accounting, and registering for elective lectures. Which college degrees give you the ......... System (IPEDS). If any stats on Midwestern University pharmacy tech program are incorrect, please contact us with the right data. |
| 0.07246786 | Fanshawe College app download Learn about Fanshawe College app download, and registering for elective discussion sections and seminars. Which college degrees ......... Data System (IPEDS). If any stats on Stratford University cell biology are incorrect, please contact us with the right data. |
| 0.07147932 | Standish Maine Licensed Vocational Nurse LVN Jobs Find out about Standish, ME licensed vocational nurse LVN jobs options. It's a smart ......... (IPEDS). If any stats on William Jewell College medical insurance coding are incorrect, please contact us with the right data. |
| Cosine Distance to Cluster Centroid | Raw Text |
| --- | --- |
| 0.027729392 | Seenti - Bundi Seenti Population - Bundi, Rajasthan Seenti is a medium size village located in Bundi Tehsil of Bundi district, Rajasthan ......... 6 months. Of 186 workers engaged in Main Work, 63 were cultivators (owner or co-owner) while 0 were Agricultural labourer. |
| 0.036407113 | Kodunaickenpatty pudur - Salem Kodunaickenpatty pudur Population - Salem, Tamil Nadu Kodunaickenpatty pudur is a large village located in Omalur Taluka of ......... 6 months. Of 3523 workers engaged in Main Work, 1500 were cultivators (owner or co-owner) while 1533 were Agricultural labourer. |
| 0.017463684 | Chhotepur - Gurdaspur Chhotepur Population - Gurdaspur, Punjab Chhotepur is a medium size village located in Gurdaspur Tehsil of Gurdaspur district, Punjab ......... 6 months. Of 677 workers engaged in Main Work, 123 were cultivators (owner or co-owner) while 142 were Agricultural labourer. |
| 0.02616191 | Maksudanpur - Azamgarh Maksudanpur Population - Azamgarh, Uttar Pradesh Maksudanpur is a small village located in Sagri Tehsil of Azamgarh district, Uttar ......... 6 months. Of 22 workers engaged in Main Work, 14 were cultivators (owner or co-owner) while 0 were Agricultural labourer. |
| 0.028420448 | Karambavane - Ratnagiri Karambavane Population - Ratnagiri, Maharashtra Karambavane is a medium size village located in Chiplun Taluka of Ratnagiri district, Maharashtra ......... 6 months. Of 444 workers engaged in Main Work, 116 were cultivators (owner or co-owner) while 214 were Agricultural labourer. |
| 0.037917078 | Barda - Purba Medinipur Barda Population - Purba Medinipur, West Bengal Barda is a large village located in Egra - I Block ......... 6 months. Of 1182 workers engaged in Main Work, 278 were cultivators (owner or co-owner) while 252 were Agricultural labourer. |
0.0 (original validation text) Offers two child care opportunities to Charles County citizens' the Port Tobacco Onsite Child Care Program and the Before and After School Child Care Program (BASCC). Supports parents through home visits to first time parents and by helping them search for child care, find resources for a child with social, emotional . . . . . . . . Special needs kids. Free to look, a fee to contact the providers. Hotline is staffed by highly-trained and friendly Child Care Consumer Education Specialists who offer both parents and providers invaluable information about child care, and referrals to local Child Care Resource and Referral agencies where they can receive individualized assistance. Child Care Options is a program of Options Community Services, a non-profit registered charity dedicated to making a difference in the South Fraser Region. Options is committed to empowering individuals, supporting families and promoting community health. Funding for Child Care Options is provided through British Columbia's Ministry of Children . . . . . . . . Rock. Child Care Options links families and child care providers in the communities of Delta, Surrey and White Rock by offering free consultation, support and child care referral services and subsidy support to parents seeking child care. Child care providers are supported through information,
outreach, resource library, networking, and learning opportunities. Below are links to child development resources, both from within the department and from external sources. Child Development Division Publications Publications that can help you will help you follow your child's development (from birth to age five) so you can identify and address any issues early on. Resources to help you understand children's . . . . . . . . families to local resources and services. Specialists are available from 9 AM to 6 PM Monday–Friday. Services are confidential. Caregivers can also visit http://www.helpmegrowvt.org/families.html to learn more about child development, discover developmental tips, and watch videos demonstrating children's developmental milestones (click a button to choose your child's age). National Domestic Violence Hotlines Programs that provide immediate assistance for women and men who have experienced domestic abuse which may include steps to ensure the person's safety; short-term emotional support; assistance with shelter; legal information and advocacy; referrals for medical treatment; ongoing
counseling and/or group support; and other related services. Hotline . . . . . . . . RP-1500.1400-200) www.thehotline.org/ Toll Free Phone: 800-799-SAFE URL: https://www.thehotline.org/ Eligibility: Anyone affected by relationship abuse. Services Provided: Available 24/7/365 via phone, TTY, and chat. Provides lifesaving tools and immediate support to enable victims to find safety and live lives free of abuse. Highly trained, experienced advocates offer support, crisis intervention, education, safety planning, and referral services.
SONET (Synchronous Optical NETwork) is a North American transmission standard for optical communication systems. SDH (Synchronous Digital Hierarchy), a European transmission standard, is a minor variant of SONET. SONET defines a hierarchy of electrical signals referred to as Synchronous Transport Signals (STS). The STS hierarchy is built upon a basic signal . . . . . . . . the corresponding row and column numbers may include up to 18 comparison operations, which are onerous to implement, for example, in terms of the required logic circuitry. This problem is exacerbated at the upper levels of the STS hierarchy, where processing of multiple pointer values per data frame is performed. US20080109728A1 - Methods and Systems for Effecting Video Transitions Represented By Bitmaps - Google Patents Methods and Systems for Effecting Video Transitions Represented By Bitmaps Download PDF David Maymudes Multi-media project editing methods and systems are described. In one embodiment, a project editing system comprises a . multi-media editing application that is configured to . synchronization models for multimedia data US20120206653A1 (en) 2012-08-16 Efficient Media Processing US6658477B1
(en) 2003-12-02 Improving the control of streaming data through multiple processing modules US6212574B1 (en) 2001-04-03 User mode proxy of kernel mode operations in a computer operating system US7752548B2 (en) 2010-07-06 Features such as titles, transitions, and/or effects which vary according to positions Both the Ethernet II and IEEE 802.3 standards define the minimum frame size as 64 bytes and the maximum as 1518 bytes. This includes all bytes from the Destination MAC Address field through the Frame Check Sequence (FCS) field. The Preamble and Start Frame Delimiter fields are not included when . . . . . . . . frame. Dropped frames are likely to be the result of collisions or other unwanted signals and are therefore considered invalid. At the data link layer the frame structure is nearly identical. At the physical layer different versions of Ethernet vary in their method for detecting and placing data on the media. A byte is a group of bits, usually eight. As memory capacities increase, the capacity of chip cards is often quoted
into training large language models (LLMs), usually by doing one-pass learning
on as many tokens as possible randomly selected from large-scale web corpora.
While training on ever-larger portions of the internet leads to consistent
performance improvements, the size of these improvements diminishes with scale,
and there has been little work exploring the effect of data selection on
pre-training and downstream performance beyond simple de-duplication methods
such as MinHash. Here, we show that careful data selection (on top of
de-duplicated data) via pre-trained model embeddings can speed up training (20%
efficiency gains) and improves average downstream accuracy on 16 NLP tasks (up
to 2%) at the 6.7B model scale. Furthermore, we show that repeating data
intelligently consistently outperforms baseline training (while repeating
random data performs worse than baseline training). Our results indicate that
clever data selection can significantly improve LLM pre-training, calls into
question the common practice of training for a single epoch on as much data as
possible, and demonstrates a path to keep improving our models past the limits
of randomly sampling web data. | http://arxiv.org/pdf/2308.12284 | Kushal Tirumala, Daniel Simig, Armen Aghajanyan, Ari S. Morcos | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20230823 | 20230823 | [
{
"id": "2006.05929"
},
{
"id": "2208.07339"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2110.05922"
},
{
"id": "1809.02789"
},
{
"id": "1910.00762"
},
{
"id": "2201.11990"
},
{
"id": "2112.09118"
},
{
"id": "1708.00489"
},
{
"id": "2112.10684"
},
{
"id": "1606.08415"
},
{
"id": "1604.01696"
},
{
"id": "2304.01373"
},
{
"id": "1812.05159"
},
{
"id": "2305.13230"
},
{
"id": "1609.07843"
},
{
"id": "2304.15004"
},
{
"id": "1901.11409"
},
{
"id": "2004.10964"
}
] |
2308.11432 | 1 | # Abstract
Autonomous agents have long been a prominent research focus in both academic and industry communities. Previous research in this field often focuses on training agents with limited knowledge within isolated environments, which diverges significantly from human learning processes, and thus makes the agents hard to achieve human-like decisions. Recently, through the acquisition of vast amounts of web knowledge, large language models (LLMs) have demonstrated remarkable potential in achieving human-level intelligence. This has sparked an upsurge in studies investigating LLM-based autonomous agents. In this paper, we present a comprehensive survey of these studies, delivering a systematic review of the field of LLM-based autonomous agents from a holistic perspective. More specifically, we first discuss the construction of LLM-based autonomous agents, for which we propose a unified framework that encompasses a majority of the previous work. Then, we present a comprehensive overview of the diverse applications of LLM-based autonomous agents in the fields of social science, natural science, and engineering. Finally, we delve into the evaluation strategies commonly used for LLM-based autonomous agents. Based on the previous studies, we also present several challenges and future directions in this field. To keep track of this field and continuously update our survey, we maintain a repository of relevant references at https://github.com/Paitesanshi/LLM-Agent-Survey.
# Introduction | 2308.11432#1 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 2 | # Introduction
"An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future."
Franklin and Graesser (1997)
Autonomous agents have long been recognized as a promising approach to achieving artificial general intelligence (AGI), which is expected to accomplish tasks through self-directed planning and actions. In previous studies, the agents are assumed to act based on simple and heuristic policy functions, and are learned in isolated and restricted environments [113, 96, 134, 60, 11, 127]. Such assumptions differ significantly from the human learning process, since the human mind is highly complex, and individuals can learn from a much wider variety of environments. Because of these gaps, the agents obtained from the previous studies are usually far from replicating human-level decision processes, especially in unconstrained, open-domain settings.
These authors contribute equally to this paper.
Preprint. Under review. | 2308.11432#2 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 3 | These authors contribute equally to this paper.
[Figure 1 plot: cumulative number of papers (y-axis: Number of Papers (cumulated); x-axis: Time (Year-Month)) by agent category (General, Tool, Simulation, Game, Web, and Embodied Agents), annotated with representative systems such as Voyager (2023-5) and MIND2WEB (2023-6).]
Figure 1: Illustration of the growth trend in the field of LLM-based autonomous agents. We present the cumulative number of papers published from January 2021 to August 2023. We assign different colors to represent various agent categories. For example, a game agent aims to simulate a game-player, while a tool agent mainly focuses on tool use. For each time period, we provide a curated list of studies with diverse agent categories. | 2308.11432#3 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 4 | In recent years, large language models (LLMs) have achieved notable successes, demonstrating significant potential in attaining human-like intelligence [120, 127, 11, 4, 146, 147]. This capability arises from leveraging comprehensive training datasets alongside a substantial number of model parameters. Building upon this capability, there has been a growing research area that employs LLMs as central controllers to construct autonomous agents to obtain human-like decision-making capabilities [21, 139, 138, 126, 133, 184, 136]. Along this direction, researchers have developed numerous promising models (see Figure 1 for an overview of this field), where the key idea is to equip LLMs with crucial human capabilities like memory and planning to make them behave like humans and complete various tasks effectively. Previously, these models were proposed independently, with limited efforts made to summarize and compare them holistically. However, we believe a systematic summary on this rapidly developing field is of great significance to comprehensively understand it and benefit to inspire future research. | 2308.11432#4 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 5 | In this paper, we conduct a comprehensive survey of the field of LLM-based autonomous agents. Specifically, we organize our survey based on three aspects including the construction, application, and evaluation of LLM-based autonomous agents. For the agent construction, we focus on two problems, that is, (1) how to design the agent architecture to better leverage LLMs, and (2) how to inspire and enhance the agent capability to complete different tasks. Intuitively, the first problem aims to build the hardware fundamentals for the agent, while the second problem focuses on providing the agent with software resources. For the first problem, we present a unified agent framework, which can encompass most of the previous studies. For the second problem, we provide a summary of the commonly-used strategies for agents' capability acquisition. In addition to discussing agent construction, we also provide an overview of the applications of LLM-based autonomous agents in social science, natural science, and engineering. Finally, we delve into the strategies for evaluating LLM-based autonomous agents, focusing on both subjective and objective strategies. | 2308.11432#5 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 6 | In summary, this survey conducts a systematic review and establishes comprehensive taxonomies for existing studies in the field of LLM-based autonomous agents. We focus on three aspects: agent construction, application, and evaluation. Drawing from previous studies, we identify various challenges in this field and discuss potential future directions. We believe that this field is still in its early stages; hence, we maintain a repository to keep track of ongoing studies at https://github.com/Paitesanshi/LLM-Agent-Survey. We expect that our survey can provide newcomers to the field of LLM-based autonomous agents with comprehensive background knowledge, and also encourage further groundbreaking studies.
# 2 LLM-based Autonomous Agent Construction | 2308.11432#6 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 7 |
# 2 LLM-based Autonomous Agent Construction
LLM-based autonomous agents are expected to effectively perform diverse tasks by leveraging the human-like capabilities of LLMs. In order to achieve this goal, there are two significant aspects, that is, (1) which architecture should be designed to better use LLMs and (2) given the designed architecture, how to enable the agent to acquire capabilities for accomplishing specific tasks. Within the context of architecture design, we contribute a systematic synthesis of existing research, culminating in a comprehensive unified framework. As for the second aspect, we summarize the strategies for agent capability acquisition based on whether they fine-tune the LLMs. When comparing LLM-based autonomous agents to traditional machine learning, designing the agent architecture is analogous to determining the network structure, while the agent capability acquisition is similar to learning the network parameters. In the following, we introduce these two aspects in more detail.
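As a rough illustration of the second aspect, the sketch below contrasts the two capability-acquisition routes distinguished above: steering a frozen LLM purely through the prompt versus fine-tuning the model parameters on task-specific data. The function names and the `llm` stub are assumptions introduced here for illustration, not part of any surveyed system.

```python
from typing import Callable, Dict, List, Tuple

LLM = Callable[[str], str]  # placeholder for any text-in/text-out LLM backend


def capability_via_prompting(llm: LLM, demonstrations: List[str], task: str) -> str:
    """Capability acquisition without fine-tuning: a frozen LLM is steered
    purely through the prompt, e.g. with in-context demonstrations."""
    prompt = "".join(f"Demonstration: {d}\n" for d in demonstrations)
    prompt += f"Task: {task}\nResponse:"
    return llm(prompt)


def capability_via_finetuning(params: Dict[str, float], data: List[Tuple[str, str]]) -> Dict[str, float]:
    """Capability acquisition with fine-tuning: the model parameters themselves
    are updated on task-specific data (represented here only as a stub)."""
    # A real implementation would run gradient updates; this stub only records the step.
    return {**params, "finetuning_examples": float(len(data))}


if __name__ == "__main__":
    stub_llm: LLM = lambda p: f"[response conditioned on {p.count('Demonstration')} demonstration(s)]"
    print(capability_via_prompting(stub_llm, ["plan a trip step by step"], "plan a study schedule"))
    print(capability_via_finetuning({"n_params": 7e9}, [("instruction", "answer")]))
```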
# 2.1 Agent Architecture Design | 2308.11432#7 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 8 | # 2.1 Agent Architecture Design
Recent advancements in LLMs have demonstrated their great potential to accomplish a wide range of tasks in the form of question-answering (QA). However, building autonomous agents is far from QA, since they need to fulfill specific roles and autonomously perceive and learn from the environment to evolve themselves like humans. To bridge the gap between traditional LLMs and autonomous agents, a crucial aspect is to design rational agent architectures to assist LLMs in maximizing their capabilities. Along this direction, previous work has developed a number of modules to enhance LLMs. In this section, we propose a unified framework to summarize these modules. Specifically, the overall structure of our framework is illustrated in Figure 2, which is composed of a profiling module, a memory module, a planning module, and an action module. The purpose of the profiling module is to identify the role of the agent. The memory and planning modules place the agent into a dynamic environment, enabling it to recall past behaviors and plan future actions. The action module is responsible for translating the agent's decisions into specific outputs. Within these modules, the profiling module impacts the memory and planning modules, and collectively, these three modules influence the action module. In the following, we detail these modules.
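To make this decomposition concrete, the following minimal sketch wires the four modules into a single perceive-plan-act loop. It is only an illustrative skeleton under our own naming: the `Agent` class, its methods, the handcrafted profile string, and the `llm` callable are assumptions introduced here, not the implementation of any specific surveyed framework.

```python
from dataclasses import dataclass, field
from typing import Callable, List

LLM = Callable[[str], str]  # placeholder for any text-in/text-out LLM backend


@dataclass
class Agent:
    llm: LLM
    profile: str                                      # profiling module: role written into every prompt
    memory: List[str] = field(default_factory=list)   # memory module: record of past steps

    def plan(self, observation: str) -> str:
        """Planning module: query the LLM for the next step, conditioned on
        the profile and a recency-based recall of the memory."""
        recalled = "\n".join(self.memory[-5:])
        prompt = (
            f"{self.profile}\n"
            f"Relevant memory:\n{recalled}\n"
            f"Current observation: {observation}\n"
            "Decide the next action:"
        )
        return self.llm(prompt)

    def act(self, observation: str) -> str:
        """Action module: turn the plan into a concrete output and write the
        step back into memory so later decisions can recall it."""
        decision = self.plan(observation)
        self.memory.append(f"obs={observation} -> action={decision}")
        return decision


if __name__ == "__main__":
    stub_llm: LLM = lambda prompt: "search('recent LLM-based agent papers')"  # stand-in backend
    agent = Agent(llm=stub_llm, profile="You are a meticulous research assistant.")
    print(agent.act("The user asks for recent work on LLM-based autonomous agents."))
```

The handcrafted profile string previews the profiling strategies discussed in Section 2.1.1, and the recency-based recall and single-step planning stand in for the richer memory structures and planning schemes surveyed below.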
# 2.1.1 Profiling Module | 2308.11432#8 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 9 | # 2.1.1 Profiling Module
Autonomous agents typically perform tasks by assuming specific roles, such as coders, teachers, and domain experts [124, 39]. The profiling module aims to indicate the profiles of the agent roles, which are usually written into the prompt to influence the LLM behaviors. Agent profiles typically encompass basic information such as age, gender, and career [121], as well as psychology information, reflecting the personalities of the agents [149], and social information, detailing the relationships between agents [149]. The choice of information to profile the agent is largely determined by the specific application scenarios. For instance, if the application aims to study the human cognitive process, then the psychology information becomes pivotal. After identifying the types of profile information, the next important problem is to create specific profiles for the agents. Existing literature commonly employs the following three strategies. | 2308.11432#9 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 10 | Handcrafting Method: in this method, agent profiles are manually specified. For instance, if one would like to design agents with different personalities, they can use "you are an outgoing person" or "you are an introverted person" to profile the agent. The handcrafting method has been leveraged in much previous work to specify agent profiles. For example, Generative Agent [176] describes an agent with information such as its name, objectives, and relationships with other agents. MetaGPT [64], ChatDev [124], and Self-collaboration [33] predefine various roles and their corresponding responsibilities in software development, manually assigning distinct profiles to each agent to facilitate collaboration. PTLLM [131] aims to explore and quantify personality traits displayed in texts generated by LLMs. This method guides LLMs in generating diverse responses by manually defining various agent characters through the use of personality assessment tools such as IPIP-NEO [77] and BFI [76]. [31] studies the toxicity of the LLM output by manually prompting LLMs with different roles, such as politicians, journalists, and businesspersons. In general, the handcrafting method is very flexible, since one can assign any profile information to the agents. However, it can also be labor-intensive, particularly when dealing with a large number of agents. | 2308.11432#10 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
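As an illustration of the handcrafting strategy described in the chunk above (2308.11432#10), the sketch below renders manually specified profiles into system-style prompts. The `AgentProfile` class and its fields are hypothetical and do not reproduce the interfaces of Generative Agent, MetaGPT, or ChatDev.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """A handcrafted profile: every field is written manually by the designer."""
    name: str
    personality: str          # e.g. "outgoing" or "introverted"
    role: str                 # e.g. "product manager" in a software team
    relationships: list[str]  # other agents this agent knows about

    def to_system_prompt(self) -> str:
        # The profile is rendered into natural language and prepended to every
        # LLM call made on behalf of this agent.
        rels = ", ".join(self.relationships) or "nobody in particular"
        return (
            f"You are {self.name}, an {self.personality} person working as a {self.role}. "
            f"You regularly interact with {rels}. Stay in character in all replies."
        )

# Two manually specified agents, in the spirit of role assignment in software teams.
profiles = [
    AgentProfile("Alice", "outgoing", "product manager", ["Bob"]),
    AgentProfile("Bob", "introverted", "software engineer", ["Alice"]),
]

for profile in profiles:
    print(profile.to_system_prompt())
```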
2308.11432 | 11 | *Our framework is also inspired by a pioneering work at https://lilianweng.github.io/posts/2023-06-23-agent/
[Figure 2 diagram: image text not recoverable from extraction. Legible component labels include Profile Contents (Personality Information, Social Information) and Generation Strategy (Handcrafting Method, LLM-Generation Method, Dataset Alignment Method); Memory Structure (Unified Memory, Hybrid Memory) and Memory Operation (Memory Writing, Memory Reflection); Planning w/o Feedback (Single-path Reasoning, Multi-path Reasoning, External Planner) and Planning w/ Feedback (Human Feedback); Action Target (Task Completion, Exploration, Communication), Action Production (Memory Recollection, Plan Following), Action Space (Tools, Self-Knowledge), and Action Impact (Environments, New Actions, Internal States).]
Figure 2: A unified framework for the architecture design of LLM-based autonomous agent. | 2308.11432#11 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 12 | Figure 2: A unified framework for the architecture design of LLM-based autonomous agent.
LLM-generation Method: in this method, agent profiles are automatically generated by LLMs. Typically, the process begins by specifying the profile generation rules, which elucidate the composition and attributes of the agent profiles within the target population. Then, one can optionally specify several seed agent profiles to serve as few-shot examples. Finally, LLMs are leveraged to generate all the agent profiles. For example, RecAgent [150] first creates seed profiles for a small number of agents by manually crafting their backgrounds, such as age, gender, personal traits, and movie preferences. Then, it leverages ChatGPT to generate more agent profiles based on the seed information. The LLM-generation method can save significant time when the number of agents is large, but it may lack precise control over the generated profiles. | 2308.11432#12 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
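A minimal sketch of the LLM-generation strategy described in chunk 2308.11432#12: generation rules plus a few handcrafted seed profiles are packed into a prompt, and an LLM is asked to produce further profiles. The `chat` function is a stand-in that returns a canned response, and the field names are illustrative rather than RecAgent's actual schema.

```python
import json

def chat(prompt: str) -> str:
    # Stand-in for a real chat-model call; returns a canned JSON response so the
    # sketch runs end to end. Replace with an actual LLM client in practice.
    return json.dumps([{"name": "Ravi", "age": 41, "gender": "male",
                        "traits": ["analytical"], "movie_preferences": ["thriller"]}])

# 1) Generation rules describing the composition of a profile.
RULES = (
    "Generate user profiles as JSON objects with the fields name, age, gender, "
    "traits (a list of adjectives), and movie_preferences (a list of genres)."
)

# 2) A few handcrafted seed profiles that serve as few-shot examples.
SEEDS = [
    {"name": "David", "age": 25, "gender": "male",
     "traits": ["curious", "patient"], "movie_preferences": ["sci-fi", "documentary"]},
    {"name": "Mei", "age": 32, "gender": "female",
     "traits": ["outgoing"], "movie_preferences": ["romance", "comedy"]},
]

def generate_profiles(n: int) -> list[dict]:
    # 3) The LLM is asked to extend the seed population under the stated rules.
    prompt = (
        RULES
        + "\n\nExample profiles:\n"
        + "\n".join(json.dumps(seed) for seed in SEEDS)
        + f"\n\nGenerate {n} new, diverse profiles as a JSON list."
    )
    return json.loads(chat(prompt))

print(generate_profiles(1))
```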
2308.11432 | 13 | Dataset Alignment Method: in this method, the agent profiles are obtained from real-world datasets. Typically, one can first organize the information about real humans in the datasets into natural language prompts, and then leverage it to profile the agents. For instance, in [5], the authors assign roles to GPT-3 based on the demographic backgrounds (such as race/ethnicity, gender, age, and state of residence) of participants in the American National Election Studies (ANES). They subsequently investigate whether GPT-3 can produce similar results to those of real humans. The dataset alignment method accurately captures the attributes of the real population, thereby making the agent behaviors more meaningful and reflective of real-world scenarios. Remark. While most of the previous work leverages the above profile generation strategies independently, we argue that combining them may yield additional benefits. For example, in order to predict social developments via agent simulation, one can leverage real-world datasets to profile a subset of the agents, thereby accurately reflecting the current social status. Subsequently, roles that do not exist in the real world but may emerge in the future can be manually assigned to the other agents, enabling the prediction of future social development. The profile module serves as the foundation for agent design, exerting significant influence on the agent memorization, planning, and action procedures.
# 2.1.2 Memory Module | 2308.11432#13 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
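A minimal sketch of the dataset alignment strategy described in chunk 2308.11432#13: tabular demographic records are rewritten as natural-language role prompts. The record fields are illustrative and do not follow the real ANES schema.

```python
# Hypothetical demographic records; the field names are illustrative and do not
# follow the real ANES schema.
records = [
    {"age": 45, "gender": "woman", "race_ethnicity": "Hispanic", "state": "Texas"},
    {"age": 29, "gender": "man", "race_ethnicity": "White", "state": "Ohio"},
]

def record_to_profile(record: dict) -> str:
    # Organize the tabular attributes of a real respondent into a natural-language
    # prompt that can be used to role-play that respondent with an LLM.
    return (
        f"You are a {record['age']}-year-old {record['race_ethnicity']} "
        f"{record['gender']} living in {record['state']}. "
        "Answer survey questions from this perspective."
    )

for record in records:
    print(record_to_profile(record))
```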
2308.11432 | 14 | # 2.1.2 Memory Module
The memory module plays a very important role in the agent architecture design. It stores information perceived from the environment and leverages the recorded memories to facilitate future actions. The memory module can help the agent to accumulate experiences, self-evolve, and behave in a more consistent, reasonable, and effective manner. This section provides a comprehensive overview of the memory module, focusing on its structures, formats, and operations.
Memory Structures: LLM-based autonomous agents usually incorporate principles and mechanisms derived from cognitive science research on human memory processes. Human memory follows a general progression from sensory memory that registers perceptual inputs, to short-term memory that maintains information transiently, to long-term memory that consolidates information over extended periods. When designing the agent memory structures, researchers take inspiration from these aspects of human memory. Specifically, short-term memory is analogous to the input information within
the context window constrained by the transformer architecture. Long-term memory resembles the external vector storage that agents can rapidly query and retrieve from as needed. In the following, we introduce two commonly used memory structures based on the short- and long-term memories. | 2308.11432#14 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
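The sketch below illustrates the two-tier division described in chunk 2308.11432#14, assuming a prompt-resident short-term buffer and an external long-term store; the toy keyword lookup stands in for the vector-storage query a real system would use.

```python
from collections import deque

class ShortTermMemory:
    """Recent observations kept directly in the prompt, bounded like a context window."""
    def __init__(self, max_items: int = 8):
        self.buffer = deque(maxlen=max_items)  # oldest items fall out automatically

    def add(self, observation: str) -> None:
        self.buffer.append(observation)

    def as_prompt(self) -> str:
        return "\n".join(self.buffer)

class LongTermMemory:
    """External storage the agent queries on demand (a stand-in for a vector store)."""
    def __init__(self):
        self.records: list[str] = []

    def consolidate(self, observation: str) -> None:
        self.records.append(observation)

    def query(self, keyword: str, k: int = 3) -> list[str]:
        # Toy retrieval by substring match; real systems rank by embedding similarity.
        return [r for r in self.records if keyword.lower() in r.lower()][:k]

stm, ltm = ShortTermMemory(), LongTermMemory()
for obs in ["saw a tree", "picked up wood", "crafted a table", "met agent Bob"]:
    stm.add(obs)
    ltm.consolidate(obs)
print(stm.as_prompt())
print(ltm.query("wood"))
```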
2308.11432 | 15 | • Unified Memory. This structure only simulates the human short-term memory, which is usually realized by in-context learning, and the memory information is directly written into the prompts. For example, RLP [54] is a conversation agent, which maintains internal states for the speaker and listener. During each round of conversation, these states serve as LLM prompts, functioning as the agent's short-term memory. SayPlan [129] is an embodied agent specifically designed for task planning. In this agent, the scene graphs and environment feedback serve as the agent's short-term memory, guiding its actions. CALYPSO [183] is an agent designed for the game Dungeons & Dragons, which can assist Dungeon Masters in the creation and narration of stories. Its short-term memory is built upon scene descriptions, monster information, and previous summaries. DEPS [154] is also a game agent, but it is developed for Minecraft. The agent initially generates task plans and then utilizes them to prompt LLMs, which in turn produce actions to complete the task. These plans can be regarded as the agent's short-term memory. In practice, implementing short-term memory is straightforward and can enhance an agent's ability to perceive recent or contextually sensitive behaviors and observations. | 2308.11432#15 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
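A minimal sketch of the unified (short-term only) memory described in chunk 2308.11432#15: the memory lives entirely in the prompt assembled for each LLM call, with nothing stored externally. The prompt layout is illustrative, not the format used by RLP, SayPlan, CALYPSO, or DEPS.

```python
def build_prompt(system: str, recent_turns: list[tuple[str, str]], user_msg: str) -> str:
    """Unified (short-term only) memory: the memory *is* the prompt. Recent turns are
    written directly into the context; nothing is stored outside the LLM call."""
    history = "\n".join(f"{speaker}: {text}" for speaker, text in recent_turns)
    return f"{system}\n\nConversation so far:\n{history}\n\nUser: {user_msg}\nAgent:"

turns = [("User", "Plan my trip to Kyoto."), ("Agent", "For how many days?")]
print(build_prompt("You are a helpful travel-planning agent.", turns, "Three days."))
```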
2308.11432 | 16 | • Hybrid Memory. This structure explicitly models the human short-term and long-term memories. The short-term memory temporarily buffers recent perceptions, while long-term memory consolidates important information over time. For instance, Generative Agent [121] employs a hybrid memory structure to facilitate agent behaviors. The short-term memory contains the context information about the agent's current situations, while the long-term memory stores the agent's past behaviors and thoughts, which can be retrieved according to the current events. AgentSims [99] also implements a hybrid memory architecture. The information provided in the prompt can be considered as short-term memory. In order to enhance the storage capacity of memory, the authors propose a long-term memory system that utilizes a vector database, facilitating efficient storage and retrieval. Specifically, the agent's daily memories are encoded as embeddings and stored in the vector database. If the agent needs to recall its previous memories, the long-term memory system retrieves relevant information using embedding similarities. This process can improve the consistency of the agent's behavior. In GITM [184], the short-term memory stores the current trajectory, and the long-term memory saves reference plans summarized from | 2308.11432#16 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
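A minimal sketch of the hybrid-memory pattern described in chunk 2308.11432#16: recent items stay in a short-term window while every item is also embedded and written to a long-term store, and recall ranks stored items by embedding similarity to the query. The bag-of-words `embed` function is a toy stand-in for the learned encoder and vector database used by systems such as AgentSims.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use a learned text encoder
    # and persist the vectors in a vector database.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class HybridMemory:
    def __init__(self, window: int = 4):
        self.window = window
        self.short_term: list[str] = []                 # recent context kept in the prompt
        self.long_term: list[tuple[Counter, str]] = []  # (embedding, text) records

    def write(self, memory: str) -> None:
        # Every memory goes to long-term storage; only the last few stay short-term.
        self.short_term = (self.short_term + [memory])[-self.window:]
        self.long_term.append((embed(memory), memory))

    def recall(self, query: str, k: int = 2) -> list[str]:
        # Rank stored memories by similarity between query and memory embeddings.
        q = embed(query)
        ranked = sorted(self.long_term, key=lambda rec: cosine(q, rec[0]), reverse=True)
        return [text for _, text in ranked[:k]]

mem = HybridMemory()
for event in ["bought seeds at the market", "planted tomatoes in the garden",
              "talked to Alice about the festival"]:
    mem.write(event)
print(mem.recall("what is growing in the garden"))
```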
2308.11432 | 17 | agent's behavior. In GITM [184], the short-term memory stores the current trajectory, and the long-term memory saves reference plans summarized from successful prior trajectories. Long-term memory provides stable knowledge, while short-term memory allows flexible planning. Reflexion [139] utilizes a short-term sliding window to capture recent feedback and incorporates persistent long-term storage to retain condensed insights. This combination allows for the utilization of both detailed immediate experiences and high-level abstractions. SCM [92] selectively activates the most relevant long-term knowledge to combine with short-term memory, enabling reasoning over complex contextual dialogues. SimplyRetrieve [117] utilizes user queries as short-term memory and stores long-term memory using external knowledge bases. This design enhances model accuracy while guaranteeing user privacy. MemorySandbox [72] implements long-term and short-term memory by utilizing a 2D canvas to store memory objects, which can then be accessed throughout various conversations. Users can create multiple conversations with different agents on the same canvas, facilitating the sharing of memory objects through a simple drag-and-drop interface. In practice, integrating both short-term | 2308.11432#17 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
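A rough sketch of the sliding-window-plus-condensed-insights pattern attributed to Reflexion in chunk 2308.11432#17, assuming a placeholder condensation step; Reflexion itself produces its reflections with an LLM rather than the string truncation used here.

```python
from collections import deque

class FeedbackMemory:
    """Sliding window of recent feedback plus a persistent store of condensed insights."""
    def __init__(self, window: int = 3):
        self.recent = deque(maxlen=window)   # detailed but short-lived feedback
        self.insights: list[str] = []        # condensed, persistent lessons

    def add_feedback(self, feedback: str) -> None:
        self.recent.append(feedback)

    def reflect(self) -> None:
        # Placeholder condensation: keep the first sentence of each recent item.
        summary = "; ".join(item.split(".")[0] for item in self.recent)
        self.insights.append(f"Lesson: {summary}")

    def context(self) -> str:
        return ("Recent feedback:\n" + "\n".join(self.recent)
                + "\nLong-term insights:\n" + "\n".join(self.insights))

mem = FeedbackMemory()
mem.add_feedback("Tool call failed. The API expects JSON, not plain text.")
mem.add_feedback("Plan was too long. Break the task into smaller steps.")
mem.reflect()
print(mem.context())
```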
2308.11432 | 18 | multiple conversations with different agents on the same canvas, facilitating the sharing of memory objects through a simple drag-and-drop interface. In practice, integrating both short-term and long-term memories can enhance an agent's ability for long-range reasoning and accumulation of valuable experiences, which are crucial for accomplishing tasks in complex environments. Remark. Careful readers may notice that there could also exist another type of memory structure, namely one based only on long-term memory. However, we find that this type of memory is rarely documented in the literature. Our speculation is that the agents are always situated in continuous and dynamic environments, with consecutive actions displaying a high correlation. Therefore, the capture of short-term memory is very important and usually cannot be disregarded. | 2308.11432#18 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
and industry communities. Previous research in this field often focuses on
training agents with limited knowledge within isolated environments, which
diverges significantly from human learning processes, and thus makes the agents
hard to achieve human-like decisions. Recently, through the acquisition of vast
amounts of web knowledge, large language models (LLMs) have demonstrated
remarkable potential in achieving human-level intelligence. This has sparked an
upsurge in studies investigating LLM-based autonomous agents. In this paper, we
present a comprehensive survey of these studies, delivering a systematic review
of the field of LLM-based autonomous agents from a holistic perspective. More
specifically, we first discuss the construction of LLM-based autonomous agents,
for which we propose a unified framework that encompasses a majority of the
previous work. Then, we present a comprehensive overview of the diverse
applications of LLM-based autonomous agents in the fields of social science,
natural science, and engineering. Finally, we delve into the evaluation
strategies commonly used for LLM-based autonomous agents. Based on the previous
studies, we also present several challenges and future directions in this
field. To keep track of this field and continuously update our survey, we
maintain a repository of relevant references at
https://github.com/Paitesanshi/LLM-Agent-Survey. | http://arxiv.org/pdf/2308.11432 | Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen | cs.AI, cs.CL | 35 pages, 5 figures, 3 tables | null | cs.AI | 20230822 | 20230907 | [
{
"id": "2307.03109"
},
{
"id": "2210.04964"
},
{
"id": "2307.15810"
},
{
"id": "2307.02485"
},
{
"id": "2307.02502"
},
{
"id": "2307.00184"
},
{
"id": "2304.11477"
},
{
"id": "2306.03901"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2305.17390"
},
{
"id": "2308.08155"
},
{
"id": "2212.10403"
},
{
"id": "1801.01290"
},
{
"id": "2302.01560"
},
{
"id": "2308.07201"
},
{
"id": "2308.12033"
},
{
"id": "2308.07540"
},
{
"id": "2306.00924"
},
{
"id": "2305.12487"
},
{
"id": "2305.03514"
},
{
"id": "2212.09746"
},
{
"id": "2305.20076"
},
{
"id": "2308.03427"
},
{
"id": "2307.06135"
},
{
"id": "2308.02151"
},
{
"id": "2207.05608"
},
{
"id": "2306.07929"
},
{
"id": "2211.09935"
},
{
"id": "2302.04761"
},
{
"id": "2304.05376"
},
{
"id": "2305.11598"
},
{
"id": "2306.03604"
},
{
"id": "2307.01848"
},
{
"id": "2306.05152"
},
{
"id": "2307.12573"
},
{
"id": "2308.16505"
},
{
"id": "2308.00245"
},
{
"id": "2308.06921"
},
{
"id": "2305.10601"
},
{
"id": "2306.06070"
},
{
"id": "2304.08244"
},
{
"id": "1509.02971"
},
{
"id": "2302.00763"
},
{
"id": "2304.05332"
},
{
"id": "2301.12868"
},
{
"id": "2308.03313"
},
{
"id": "2308.03656"
},
{
"id": "2305.17066"
},
{
"id": "2308.02773"
},
{
"id": "2303.11381"
},
{
"id": "2308.06782"
},
{
"id": "2308.09687"
},
{
"id": "2301.04589"
},
{
"id": "2308.01542"
},
{
"id": "2305.12647"
},
{
"id": "2308.03983"
},
{
"id": "2304.13712"
},
{
"id": "2307.07924"
},
{
"id": "2305.14279"
},
{
"id": "2305.14325"
},
{
"id": "2303.17580"
},
{
"id": "2306.16092"
},
{
"id": "2304.14354"
},
{
"id": "2305.16960"
},
{
"id": "2307.07871"
},
{
"id": "2302.13971"
},
{
"id": "2307.11760"
},
{
"id": "2112.09332"
},
{
"id": "2303.17491"
},
{
"id": "2307.06187"
},
{
"id": "2308.00352"
},
{
"id": "2308.00436"
},
{
"id": "2301.05327"
},
{
"id": "2307.14984"
},
{
"id": "2304.04370"
},
{
"id": "2305.14938"
},
{
"id": "2307.10337"
},
{
"id": "2308.04026"
},
{
"id": "2308.03688"
},
{
"id": "2305.14323"
},
{
"id": "2308.01423"
},
{
"id": "2307.04738"
},
{
"id": "2304.10750"
},
{
"id": "2301.12050"
},
{
"id": "2304.08354"
},
{
"id": "2305.16291"
},
{
"id": "2305.18323"
},
{
"id": "2305.14322"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2303.18223"
},
{
"id": "2205.00445"
},
{
"id": "2307.16789"
},
{
"id": "2204.01691"
},
{
"id": "2308.06391"
},
{
"id": "2308.02439"
},
{
"id": "2302.07842"
},
{
"id": "2212.04088"
},
{
"id": "2305.13455"
},
{
"id": "2307.12966"
},
{
"id": "2305.10626"
},
{
"id": "1707.06347"
},
{
"id": "2307.13854"
},
{
"id": "2304.13343"
},
{
"id": "2302.02676"
},
{
"id": "2306.09299"
},
{
"id": "2305.14992"
},
{
"id": "2305.02412"
},
{
"id": "2308.10379"
},
{
"id": "2306.02552"
},
{
"id": "2304.05335"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2308.05481"
},
{
"id": "2308.04030"
},
{
"id": "2305.17144"
},
{
"id": "2307.15833"
},
{
"id": "2305.08144"
},
{
"id": "2303.11504"
},
{
"id": "2005.14165"
},
{
"id": "2305.05658"
},
{
"id": "2308.01552"
},
{
"id": "2305.10250"
},
{
"id": "2308.05960"
},
{
"id": "2307.04986"
},
{
"id": "2306.00739"
},
{
"id": "2305.16867"
},
{
"id": "2305.15334"
},
{
"id": "2308.09904"
},
{
"id": "2303.11366"
},
{
"id": "2308.14296"
},
{
"id": "2303.17651"
},
{
"id": "2304.14721"
}
] |
2308.11432 | 19 | Memory Formats: In addition to the memory structure, another perspective for analyzing the memory module is the format of the memory storage medium, for example, natural language memory or embedding memory. Different memory formats possess distinct strengths and are suitable for different applications. In the following, we introduce several representative memory formats.
• Natural Languages. In this format, memory information such as agent behaviors and observations is described directly in raw natural language. This format possesses several strengths. First, the memory information can be expressed in a flexible and understandable manner. Moreover, it retains rich semantic information that can provide comprehensive signals to guide agent behaviors. In previous work, Reflexion [139] stores experiential feedback in natural language within a sliding window, and Voyager [148] employs natural language descriptions to represent skills within the Minecraft game, which are stored directly in memory.
⢠Embeddings. In this format, memory information is encoded into embedding vectors, which can enhance the memory retrieval and reading efficiency. For instance, MemoryBank [179] encodes each memory segment into an embedding vector, which creates an indexed corpus for retrieval. GITM [184] represents reference plans as embeddings to facilitate matching and reuse. Furthermore, ChatDev [124] encodes dialogue history into vectors for retrieval. | 2308.11432#19 | A Survey on Large Language Model based Autonomous Agents | Autonomous agents have long been a prominent research focus in both academic
2308.11432 | 20 | • Databases. In this format, memory information is stored in databases, allowing the agent to manipulate memories efficiently and comprehensively. For example, ChatDB [67] uses a database as a symbolic memory module, where the agent can use SQL statements to precisely add, delete, and revise the memory information. In DB-GPT [182], the memory module is also built on a database; to operate on the memory information more intuitively, the agents are fine-tuned to understand and execute SQL queries, enabling them to interact with databases directly using natural language. | 2308.11432#20 | A Survey on Large Language Model based Autonomous Agents |
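As a rough sketch of the database format (assuming an SQLite backend; this is not ChatDB's or DB-GPT's actual code), the snippet below exposes add, revise, delete, and keyword-based read operations over the agent's memories through SQL statements.

```python
import sqlite3

# In-memory SQLite database acting as the agent's symbolic memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memory (id INTEGER PRIMARY KEY, step INTEGER, content TEXT)")

def add_memory(step: int, content: str) -> None:
    conn.execute("INSERT INTO memory (step, content) VALUES (?, ?)", (step, content))

def revise_memory(mem_id: int, content: str) -> None:
    conn.execute("UPDATE memory SET content = ? WHERE id = ?", (content, mem_id))

def delete_memory(mem_id: int) -> None:
    conn.execute("DELETE FROM memory WHERE id = ?", (mem_id,))

def read_memory(keyword: str) -> list[tuple]:
    cur = conn.execute(
        "SELECT id, step, content FROM memory WHERE content LIKE ?", (f"%{keyword}%",)
    )
    return cur.fetchall()

add_memory(1, "User prefers vegetarian restaurants.")
add_memory(2, "Booked a table for Friday at 7pm.")
revise_memory(2, "Booked a table for Friday at 8pm.")
print(read_memory("table"))  # [(2, 2, 'Booked a table for Friday at 8pm.')]
```

In the systems surveyed, an LLM would generate such SQL statements (or natural-language requests translated into SQL) instead of the hard-coded calls shown here.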
2308.11432 | 21 | • Structured Lists. In this format, memory information is organized into lists, and the semantics of memory can be conveyed in an efficient and concise manner. For instance, GITM [184] stores action lists for sub-goals in a hierarchical tree structure, which explicitly captures the relationships between goals and the corresponding plans. RET-LLM [114] first converts natural language sentences into triplet phrases and subsequently stores them in memory.

Remark. Here we only show several representative memory formats, but it is important to note that there are many others we do not cover, such as the programming code used by [148]. Moreover, these formats are not mutually exclusive; many models incorporate multiple formats to concurrently harness their respective benefits. A notable example is the memory module of GITM [184], which utilizes a key-value list structure: the keys are represented by embedding vectors, while the values consist of raw natural language. The embedding keys allow for efficient retrieval of memory records, while the natural-language values keep the memory contents comprehensive, enabling more informed agent actions.
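The structured-list idea can be sketched with a small triplet store loosely inspired by RET-LLM; the class and its methods here are hypothetical illustrations, not the paper's implementation. Memories are written as (subject, relation, object) triples and read back by matching the subject and, optionally, the relation.

```python
from collections import defaultdict
from typing import Optional

class TripletMemory:
    """Stores memories as (subject, relation, object) triples for concise, structured lookup."""

    def __init__(self) -> None:
        self.triples: list[tuple[str, str, str]] = []
        self.by_subject = defaultdict(list)  # index from subject to its triples

    def write(self, subject: str, relation: str, obj: str) -> None:
        triple = (subject, relation, obj)
        self.triples.append(triple)
        self.by_subject[subject].append(triple)

    def read(self, subject: str, relation: Optional[str] = None) -> list[tuple[str, str, str]]:
        candidates = self.by_subject.get(subject, [])
        if relation is None:
            return list(candidates)
        return [t for t in candidates if t[1] == relation]

memory = TripletMemory()
memory.write("Alice", "works_at", "hospital")
memory.write("Alice", "prefers", "morning meetings")
print(memory.read("Alice", "prefers"))  # [('Alice', 'prefers', 'morning meetings')]
```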
Above, we mainly discuss the internal designs of the memory module. In the following, we turn our focus to memory operations, which are used to interact with external environments. | 2308.11432#21 | A Survey on Large Language Model based Autonomous Agents |
2308.11432 | 22 | Memory Operations: The memory module plays a critical role in allowing the agent to acquire, accumulate, and utilize significant knowledge by interacting with the environment. The interaction between the agent and the environment is accomplished through three crucial memory operations: memory reading, memory writing, and memory reflection. In the following, we introduce these operations in more detail.
⢠Memory Reading. The objective of memory reading is to extract meaningful information from memory to enhance the agentâs actions. For example, using the previously successful actions to achieve similar goals [184]. The key of memory reading lies in how to extract valuable information. Usually, there three commonly used criteria for information extraction, that is, the recency, relevance, and importance [121]. Memories that are more recent, relevant, and important are more likely to be extracted. Formally, we conclude the following equation from existing literature for memory information extraction:
m∗ = arg min_{m ∈ M} α s^{rec}(q, m) + β s^{rel}(q, m) + γ s^{imp}(m),    (1) | 2308.11432#22 | A Survey on Large Language Model based Autonomous Agents |
2308.11432 | 23 | m∗ = arg min_{m ∈ M} α s^{rec}(q, m) + β s^{rel}(q, m) + γ s^{imp}(m),    (1)
where q is the query, for example, the task that the agent should address or the context in which the agent is situated, and M is the set of all memories. s^{rec}(·), s^{rel}(·), and s^{imp}(·) are the scoring functions measuring the recency, relevance, and importance of the memory m. These scoring functions can be implemented using various methods; for example, s^{rel}(q, m) can be realized based on LSH, ANNOY, HNSW, FAISS, and so on†. It should be noted that s^{imp} only reflects the characteristics of the memory itself, and is thus unrelated to the query q. α, β, and γ are balancing parameters; by assigning them different values, one can obtain various memory reading strategies. For example, by setting α = γ = 0, many studies [114, 184, 148, 54] only consider the relevance score s^{rel} for memory reading. By assigning α = β = γ = 1.0, [121] weights all three metrics equally to extract information from the memory.
† https://lilianweng.github.io/posts/2023-06-23-agent/
| 2308.11432#23 | A Survey on Large Language Model based Autonomous Agents |
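To ground Eq. (1), the sketch below implements weighted memory reading. The concrete scoring functions (elapsed time for recency, one minus token overlap for relevance, one minus a stored importance value) are illustrative assumptions rather than choices prescribed by the surveyed papers; they are written as distance-style scores, where lower is better, so that the arg min in Eq. (1) selects the most recent, relevant, and important memory.

```python
import time

def s_rec(memory: dict, now: float) -> float:
    """Recency score: elapsed seconds since the memory was written (lower = more recent)."""
    return now - memory["timestamp"]

def s_rel(query: dict, memory: dict) -> float:
    """Relevance score: one minus Jaccard token overlap with the query (lower = more relevant)."""
    q_tokens = set(query["text"].lower().split())
    m_tokens = set(memory["text"].lower().split())
    overlap = len(q_tokens & m_tokens) / max(len(q_tokens | m_tokens), 1)
    return 1.0 - overlap

def s_imp(memory: dict) -> float:
    """Importance score: one minus the importance assigned at write time (lower = more important)."""
    return 1.0 - memory["importance"]

def read_memory(query: dict, memories: list[dict], alpha: float, beta: float, gamma: float) -> dict:
    """Eq. (1): return the arg min over memories of alpha*s_rec + beta*s_rel + gamma*s_imp."""
    now = time.time()
    return min(
        memories,
        key=lambda m: alpha * s_rec(m, now) + beta * s_rel(query, m) + gamma * s_imp(m),
    )

memories = [
    {"text": "The user asked to book a flight to Tokyo.", "timestamp": time.time() - 3600, "importance": 0.9},
    {"text": "The weather was sunny yesterday.", "timestamp": time.time() - 60, "importance": 0.1},
]
# With a small recency weight, the older but relevant and important flight memory is selected.
best = read_memory({"text": "book a flight"}, memories, alpha=0.0001, beta=1.0, gamma=1.0)
print(best["text"])
```

Setting beta and gamma to zero recovers a purely recency-based strategy, while alpha = gamma = 0 reduces to the relevance-only reading used by several of the cited systems.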