Dataset schema (one row per chunk; value ranges as reported by the dataset viewer):

id: string, 12 to 15 characters
title: string, 8 to 162 characters
content: string, 1 to 17.6k characters
prechunk_id: string, 0 to 15 characters
postchunk_id: string, 0 to 15 characters
arxiv_id: string, 10 characters
references: list, 1 item per row
2309.07864#218
The Rise and Potential of Large Language Model Based Agents: A Survey
Images speak in images: A generalist painter for in-context visual learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 6830–6839. IEEE, 2023. [194] Wang, C., S. Chen, Y. Wu, et al. Neural codec language models are zero-shot text to speech synthesizers. CoRR, abs/2301.02111, 2023. [195] Dong, Q., L. Li, D. Dai, et al. A survey for in-context learning. CoRR, abs/2301.00234, 2023. [196] Ke, Z., B. Liu.
2309.07864#217
2309.07864#219
2309.07864
[ "2305.08982" ]
2309.07864#219
The Rise and Potential of Large Language Model Based Agents: A Survey
Continual learning of natural language processing tasks: A survey. ArXiv, abs/2211.12701, 2022. [197] Wang, L., X. Zhang, H. Su, et al. A comprehensive survey of continual learning: Theory, method and application. ArXiv, abs/2302.00487, 2023. [198] Razdaibiedina, A., Y. Mao, R. Hou, et al. Progressive prompts: Continual learning for language models. In The Eleventh International Conference on Learning Representations. 2023. [199] Marshall, L. H., H. W. Magoun.
2309.07864#218
2309.07864#220
2309.07864
[ "2305.08982" ]
2309.07864#220
The Rise and Potential of Large Language Model Based Agents: A Survey
Discoveries in the human brain: neuroscience prehistory, brain structure, and function. Springer Science & Business Media, 2013. [200] Searle, J. R. What is language: some preliminary remarks. Explorations in Pragmatics. Linguistic, cognitive and intercultural aspects, pages 7–37, 2007. [201] Touvron, H., T. Lavril, G. Izacard, et al.
2309.07864#219
2309.07864#221
2309.07864
[ "2305.08982" ]
2309.07864#221
The Rise and Potential of Large Language Model Based Agents: A Survey
Llama: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023. [202] Scao, T. L., A. Fan, C. Akiki, et al. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100, 2022. [203] Almazrouei, E., H. Alobeidli, A. Alshamsi, et al.
2309.07864#220
2309.07864#222
2309.07864
[ "2305.08982" ]
2309.07864#222
The Rise and Potential of Large Language Model Based Agents: A Survey
Falcon-40b: an open large language model with state-of-the-art performance, 2023. [204] Serban, I. V., R. Lowe, L. Charlin, et al. Generative deep neural networks for dialogue: A short review. CoRR, abs/1611.06216, 2016. [205] Vinyals, O., Q. V. Le. A neural conversational model. CoRR, abs/1506.05869, 2015.
2309.07864#221
2309.07864#223
2309.07864
[ "2305.08982" ]
2309.07864#223
The Rise and Potential of Large Language Model Based Agents: A Survey
[206] Adiwardana, D., M. Luong, D. R. So, et al. Towards a human-like open-domain chatbot. CoRR, abs/2001.09977, 2020. [207] Zhuge, M., H. Liu, F. Faccio, et al. Mindstorms in natural language-based societies of mind. CoRR, abs/2305.17066, 2023. [208] Roller, S., E. Dinan, N. Goyal, et al.
2309.07864#222
2309.07864#224
2309.07864
[ "2305.08982" ]
2309.07864#224
The Rise and Potential of Large Language Model Based Agents: A Survey
Recipes for building an open-domain chatbot. In P. Merlo, J. Tiedemann, R. Tsarfaty, eds., Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 300–325. Association for Computational Linguistics, 2021. [209] Taori, R., I. Gulrajani, T. Zhang, et al. Stanford alpaca: An instruction-following llama model, 2023. [210] Raffel, C., N. Shazeer, A. Roberts, et al.
2309.07864#223
2309.07864#225
2309.07864
[ "2305.08982" ]
2309.07864#225
The Rise and Potential of Large Language Model Based Agents: A Survey
Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020. [211] Ge, Y., W. Hua, J. Ji, et al. Openagi: When LLM meets domain experts. CoRR, abs/2304.04370, 2023. [212] Rajpurkar, P., J. Zhang, K. Lopyrev, et al. Squad: 100,000+ questions for machine comprehension of text. In J. Su, X. Carreras, K. Duh, eds., Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392. The Association for Computational Linguistics, 2016. [213] Ahuja, K., R. Hada, M. Ochieng, et al.
2309.07864#224
2309.07864#226
2309.07864
[ "2305.08982" ]
2309.07864#226
The Rise and Potential of Large Language Model Based Agents: A Survey
MEGA: multilingual evaluation of generative AI. CoRR, abs/2303.12528, 2023. [214] See, A., A. Pappu, R. Saxena, et al. Do massively pretrained language models make better storytellers? In M. Bansal, A. Villavicencio, eds., Proceedings of the 23rd Conference on Computational Natural Language Learning, CoNLL 2019, Hong Kong, China, November 3-4, 2019, pages 843–861. Association for Computational Linguistics, 2019. [215] Radford, A., J. Wu, D. Amodei, et al. Better language models and their implications. OpenAI blog, 1(2), 2019. [216] McCoy, R. T., P. Smolensky, T. Linzen, et al. How much do language models copy from their training data? evaluating linguistic novelty in text generation using RAVEN. CoRR, abs/2111.09509, 2021. [217] Tellex, S., T. Kollar, S. Dickerson, et al.
2309.07864#225
2309.07864#227
2309.07864
[ "2305.08982" ]
2309.07864#227
The Rise and Potential of Large Language Model Based Agents: A Survey
Understanding natural language commands for robotic navigation and mobile manipulation. In W. Burgard, D. Roth, eds., Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2011, San Francisco, California, USA, August 7-11, 2011, pages 1507–1514. AAAI Press, 2011. [218] Christiano, P. F., J. Leike, T. B. Brown, et al. Deep reinforcement learning from human preferences. In I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, R. Garnett, eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4299–4307. 2017. [219] Basu, C., M. Singhal, A. D.
2309.07864#226
2309.07864#228
2309.07864
[ "2305.08982" ]
2309.07864#228
The Rise and Potential of Large Language Model Based Agents: A Survey
Dragan. Learning from richer human guidance: Augmenting comparison-based learning with feature queries. In T. Kanda, S. Sabanovic, G. Hoffman, A. Tapus, eds., Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI 2018, Chicago, IL, USA, March 05-08, 2018, pages 132–140. ACM, 2018. [220] Sumers, T. R., M. K. Ho, R. X. D. Hawkins, et al.
2309.07864#227
2309.07864#229
2309.07864
[ "2305.08982" ]
2309.07864#229
The Rise and Potential of Large Language Model Based Agents: A Survey
Learning rewards from linguistic feedback. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 6002–6010. AAAI Press, 2021. [221] Jeon, H. J., S. Milli, A. D. Dragan. Reward-rational (implicit) choice: A unifying formalism for reward learning.
2309.07864#228
2309.07864#230
2309.07864
[ "2305.08982" ]
2309.07864#230
The Rise and Potential of Large Language Model Based Agents: A Survey
In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin, eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 2020. [222] McShane, M. Reference resolution challenges for intelligent agents: The need for knowledge. IEEE Intell. Syst., 24(4):47–58, 2009. [223] Gururangan, S., A. Marasovic, S. Swayamdipta, et al. Don'
2309.07864#229
2309.07864#231
2309.07864
[ "2305.08982" ]
2309.07864#231
The Rise and Potential of Large Language Model Based Agents: A Survey
t stop pretraining: Adapt language models to domains and tasks. In D. Jurafsky, J. Chai, N. Schluter, J. R. Tetreault, eds., Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8342–8360. Association for Computational Linguistics, 2020. [224] Shi, F., X. Chen, K. Misra, et al.
2309.07864#230
2309.07864#232
2309.07864
[ "2305.08982" ]
2309.07864#232
The Rise and Potential of Large Language Model Based Agents: A Survey
Large language models can be easily distracted by irrelevant context. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 31210–31227. PMLR, 2023. [225] Zhang, Y., Y. Li, L. Cui, et al.
2309.07864#231
2309.07864#233
2309.07864
[ "2305.08982" ]
2309.07864#233
The Rise and Potential of Large Language Model Based Agents: A Survey
Siren's song in the AI ocean: A survey on hallucination in large language models. CoRR, abs/2309.01219, 2023. [226] Mialon, G., R. Dessì, M. Lomeli, et al. Augmented language models: a survey. CoRR, abs/2302.07842, 2023. [227] Ren, R., Y. Wang, Y. Qu, et al. Investigating the factual knowledge boundary of large language models with retrieval augmentation. CoRR, abs/2307.11019, 2023. [228] Nuxoll, A. M., J. E. Laird.
2309.07864#232
2309.07864#234
2309.07864
[ "2305.08982" ]
2309.07864#234
The Rise and Potential of Large Language Model Based Agents: A Survey
Extending cognitive architecture with episodic memory. In AAAI, pages 1560–1564. 2007. [229] Squire, L. R. Mechanisms of memory. Science, 232(4758):1612–1619, 1986. [230] Schwabe, L., K. Nader, J. C. Pruessner. Reconsolidation of human memory: brain mechanisms and clinical relevance. Biological psychiatry, 76(4):274–280, 2014. [231] Hutter, M.
2309.07864#233
2309.07864#235
2309.07864
[ "2305.08982" ]
2309.07864#235
The Rise and Potential of Large Language Model Based Agents: A Survey
A theory of universal artificial intelligence based on algorithmic complexity. arXiv preprint cs/0004001, 2000. [232] Zhang, X., F. Wei, M. Zhou. HIBERT: document level pre-training of hierarchical bidirectional transformers for document summarization. In A. Korhonen, D. R. Traum, L. Màrquez, eds., Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 5059–5069. Association for Computational Linguistics, 2019. [233] Mohtashami, A., M.
2309.07864#234
2309.07864#236
2309.07864
[ "2305.08982" ]
2309.07864#236
The Rise and Potential of Large Language Model Based Agents: A Survey
Jaggi. Landmark attention: Random-access infinite context length for transformers. CoRR, abs/2305.16300, 2023. [234] Chalkidis, I., X. Dai, M. Fergadiotis, et al. An exploration of hierarchical attention transformers for efficient long document classification. CoRR, abs/2210.05529, 2022. [235] Nie, Y., H. Huang, W. Wei, et al. Capturing global structural information in long document question answering with compressive graph selector network. In Y. Goldberg, Z. Kozareva, Y. Zhang, eds., Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5036–5047. Association for Computational Linguistics, 2022. [236] Bertsch, A., U. Alon, G. Neubig, et al. Unlimiformer: Long-range transformers with unlimited length input. CoRR, abs/2305.01625, 2023.
2309.07864#235
2309.07864#237
2309.07864
[ "2305.08982" ]
2309.07864#237
The Rise and Potential of Large Language Model Based Agents: A Survey
[237] Manakul, P., M. J. F. Gales. Sparsity and sentence structure in encoder-decoder attention of summarization systems. In M. Moens, X. Huang, L. Specia, S. W. Yih, eds., Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 9359–9368. Association for Computational Linguistics, 2021. [238] Zaheer, M., G. Guruganesh, K. A. Dubey, et al. Big bird:
2309.07864#236
2309.07864#238
2309.07864
[ "2305.08982" ]
2309.07864#238
The Rise and Potential of Large Language Model Based Agents: A Survey
Transformers for longer sequences. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin, eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 2020. [239] Zhao, A., D. Huang, Q. Xu, et al.
2309.07864#237
2309.07864#239
2309.07864
[ "2305.08982" ]
2309.07864#239
The Rise and Potential of Large Language Model Based Agents: A Survey
Expel: LLM agents are experiential learners. CoRR, abs/2308.10144, 2023. [240] Zhou, X., G. Li, Z. Liu. LLM as DBA. CoRR, abs/2308.05481, 2023. [241] Wason, P. C. Reasoning about a rule. Quarterly journal of experimental psychology, 20(3):273–281, 1968. [242] Wason, P. C., P. N. Johnson-Laird. Psychology of reasoning:
2309.07864#238
2309.07864#240
2309.07864
[ "2305.08982" ]
2309.07864#240
The Rise and Potential of Large Language Model Based Agents: A Survey
Structure and content, vol. 86. Harvard University Press, 1972. [243] Galotti, K. M. Approaches to studying formal and everyday reasoning. Psychological bulletin, 105(3):331, 1989. [244] Huang, J., K. C. Chang. Towards reasoning in large language models: A survey. In A. Rogers, J. L. Boyd-Graber, N. Okazaki, eds., Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 1049–1065. Association for Computational Linguistics, 2023. [245] Webb, T. W., K. J. Holyoak, H. Lu.
2309.07864#239
2309.07864#241
2309.07864
[ "2305.08982" ]
2309.07864#241
The Rise and Potential of Large Language Model Based Agents: A Survey
Emergent analogical reasoning in large language models. CoRR, abs/2212.09196, 2022. [246] Feng, G., B. Zhang, Y. Gu, et al. Towards revealing the mystery behind chain of thought: a theoretical perspective. CoRR, abs/2305.15408, 2023. [247] Grafman, J., L. Spector, M. J. Rattermann. Planning and the brain. In The cognitive psychology of planning, pages 191–208. Psychology Press, 2004. [248] Unterrainer, J. M., A. M. Owen.
2309.07864#240
2309.07864#242
2309.07864
[ "2305.08982" ]
2309.07864#242
The Rise and Potential of Large Language Model Based Agents: A Survey
Planning and problem solving: from neuropsychology to functional neuroimaging. Journal of Physiology-Paris, 99(4-6):308–317, 2006. [249] Zula, K. J., T. J. Chermack. Integrative literature review: Human capital planning: A review of literature and implications for human resource development. Human Resource Development Review, 6(3):245–262, 2007. [250] Bratman, M. E., D. J. Israel, M. E. Pollack.
2309.07864#241
2309.07864#243
2309.07864
[ "2305.08982" ]
2309.07864#243
The Rise and Potential of Large Language Model Based Agents: A Survey
Plans and resource-bounded practical reasoning. Computational intelligence, 4(3):349–355, 1988. [251] Russell, S., P. Norvig. Artificial intelligence - a modern approach, 2nd Edition. Prentice Hall series in artificial intelligence. Prentice Hall, 2003. [252] Fainstein, S. S., J. DeFilippis. Readings in planning theory. John Wiley & Sons, 2015. [253] Sebastia, L., E. Onaindia, E. Marzal.
2309.07864#242
2309.07864#244
2309.07864
[ "2305.08982" ]
2309.07864#244
The Rise and Potential of Large Language Model Based Agents: A Survey
Decomposition of planning problems. AI Communications, 19(1):49–81, 2006. [254] Crosby, M., M. Rovatsos, R. Petrick. Automated agent decomposition for classical planning. In Proceedings of the International Conference on Automated Planning and Scheduling, vol. 23, pages 46–54. 2013. [255] Xu, B., Z. Peng, B. Lei, et al. Rewoo: Decoupling reasoning from observations for efficient augmented language models. CoRR, abs/2305.18323, 2023.
2309.07864#243
2309.07864#245
2309.07864
[ "2305.08982" ]
2309.07864#245
The Rise and Potential of Large Language Model Based Agents: A Survey
[256] Raman, S. S., V. Cohen, E. Rosen, et al. Planning with large language models via corrective re-prompting. CoRR, abs/2211.09935, 2022. [257] Lyu, Q., S. Havaldar, A. Stein, et al. Faithful chain-of-thought reasoning. CoRR, abs/2301.13379, 2023. [258] Huang, W., P. Abbeel, D. Pathak, et al.
2309.07864#244
2309.07864#246
2309.07864
[ "2305.08982" ]
2309.07864#246
The Rise and Potential of Large Language Model Based Agents: A Survey
Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, S. Sabato, eds., International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, vol. 162 of Proceedings of Machine Learning Research, pages 9118–9147. PMLR, 2022. [259] Dagan, G., F. Keller, A. Lascarides. Dynamic planning with a LLM. CoRR, abs/2308.06391, 2023. [260] Rana, K., J. Haviland, S. Garg, et al. Sayplan: Grounding large language models using 3d scene graphs for scalable task planning. CoRR, abs/2307.06135, 2023. [261] Peters, M. E., M. Neumann, M. Iyyer, et al. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Association for Computational Linguistics, New Orleans, Louisiana, 2018. [262] Devlin, J., M. Chang, K. Lee, et al.
2309.07864#245
2309.07864#247
2309.07864
[ "2305.08982" ]
2309.07864#247
The Rise and Potential of Large Language Model Based Agents: A Survey
BERT: pre-training of deep bidirectional transformers for language understanding. In J. Burstein, C. Doran, T. Solorio, eds., Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics, 2019. [263] Solaiman, I., C. Dennison.
2309.07864#246
2309.07864#248
2309.07864
[ "2305.08982" ]
2309.07864#248
The Rise and Potential of Large Language Model Based Agents: A Survey
Process for adapting language models to society (palms) with values-targeted datasets. Advances in Neural Information Processing Systems, 34:5861–5873, 2021. [264] Bach, S. H., V. Sanh, Z. X. Yong, et al. Promptsource: An integrated development environment and repository for natural language prompts. In V. Basile, Z. Kozareva, S. Stajner, eds., Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 - System Demonstrations, Dublin, Ireland, May 22-27, 2022, pages 93–
2309.07864#247
2309.07864#249
2309.07864
[ "2305.08982" ]
2309.07864#249
The Rise and Potential of Large Language Model Based Agents: A Survey
104. Association for Computational Linguistics, 2022. [265] Iyer, S., X. V. Lin, R. Pasunuru, et al. OPT-IML: scaling language model instruction meta learning through the lens of generalization. CoRR, abs/2212.12017, 2022. [266] Winston, P. H. Learning and reasoning by analogy. Commun. ACM, 23(12):689–703, 1980. [267] Lu, Y., M. Bartolo, A. Moore, et al.
2309.07864#248
2309.07864#250
2309.07864
[ "2305.08982" ]
2309.07864#250
The Rise and Potential of Large Language Model Based Agents: A Survey
Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In S. Muresan, P. Nakov, A. Villavicencio, eds., Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8086–8098. Association for Computational Linguistics, 2022. [268] Tsimpoukelli, M., J. Menick, S. Cabi, et al.
2309.07864#249
2309.07864#251
2309.07864
[ "2305.08982" ]
2309.07864#251
The Rise and Potential of Large Language Model Based Agents: A Survey
Multimodal few-shot learning with frozen language models. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, J. W. Vaughan, eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 200–212. 2021. [269] Bar, A., Y. Gandelsman, T. Darrell, et al.
2309.07864#250
2309.07864#252
2309.07864
[ "2305.08982" ]
2309.07864#252
The Rise and Potential of Large Language Model Based Agents: A Survey
Visual prompting via image inpainting. In NeurIPS. 2022. [270] Zhu, W., H. Liu, Q. Dong, et al. Multilingual machine translation with large language models: Empirical results and analysis. CoRR, abs/2304.04675, 2023. [271] Zhang, Z., L. Zhou, C. Wang, et al. Speak foreign languages with your own voice: Cross-lingual neural codec language modeling. CoRR, abs/2303.03926, 2023. [272] Zhang, J., J. Zhang, K. Pertsch, et al.
2309.07864#251
2309.07864#253
2309.07864
[ "2305.08982" ]
2309.07864#253
The Rise and Potential of Large Language Model Based Agents: A Survey
Bootstrap your own skills: Learning to solve new tasks with large language model guidance. In 7th Annual Conference on Robot Learning. 2023. [273] McCloskey, M., N. J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109–165, 1989. [274] Kirkpatrick, J., R. Pascanu, N. Rabinowitz, et al.
2309.07864#252
2309.07864#254
2309.07864
[ "2305.08982" ]
2309.07864#254
The Rise and Potential of Large Language Model Based Agents: A Survey
Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521–3526, 2017. [275] Li, Z., D. Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935–2947, 2017. [276] Farajtabar, M., N. Azizan, A. Mott, et al. Orthogonal gradient descent for continual learning.
2309.07864#253
2309.07864#255
2309.07864
[ "2305.08982" ]
2309.07864#255
The Rise and Potential of Large Language Model Based Agents: A Survey
In International Conference on Artificial Intelligence and Statistics, pages 3762–3773. PMLR, 2020. [277] Smith, J. S., Y.-C. Hsu, L. Zhang, et al. Continual diffusion: Continual customization of text-to-image diffusion with c-lora. arXiv preprint arXiv:2304.06027, 2023. [278] Lopez-Paz, D., M. Ranzato.
2309.07864#254
2309.07864#256
2309.07864
[ "2305.08982" ]
2309.07864#256
The Rise and Potential of Large Language Model Based Agents: A Survey
Gradient episodic memory for continual learning. Advances in neural information processing systems, 30, 2017. [279] de Masson d'Autume, C., S. Ruder, L. Kong, et al. Episodic memory in lifelong language learning. Advances in Neural Information Processing Systems, 32, 2019. [280] Rolnick, D., A. Ahuja, J. Schwarz, et al. Experience replay for continual learning. Advances in Neural Information Processing Systems, 32, 2019.
2309.07864#255
2309.07864#257
2309.07864
[ "2305.08982" ]
2309.07864#257
The Rise and Potential of Large Language Model Based Agents: A Survey
[281] Serrà, J., D. Surís, M. Miron, et al. Overcoming catastrophic forgetting with hard attention to the task. In International Conference on Machine Learning. 2018. [282] Dosovitskiy, A., L. Beyer, A. Kolesnikov, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. [283] van den Oord, A., O. Vinyals, K. Kavukcuoglu.
2309.07864#256
2309.07864#258
2309.07864
[ "2305.08982" ]
2309.07864#258
The Rise and Potential of Large Language Model Based Agents: A Survey
Neural discrete representation learning. In I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, R. Garnett, eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 6306–6315. 2017. [284] Mehta, S., M. Rastegari. Mobilevit: Light-weight, general-purpose, and mobile-friendly vision transformer.
2309.07864#257
2309.07864#259
2309.07864
[ "2305.08982" ]
2309.07864#259
The Rise and Potential of Large Language Model Based Agents: A Survey
In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [285] Tolstikhin, I. O., N. Houlsby, A. Kolesnikov, et al. Mlp-mixer: An all-mlp architecture for vision. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, J. W. Vaughan, eds., Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 24261–24272. 2021. [286] Huang, S., L. Dong, W. Wang, et al.
2309.07864#258
2309.07864#260
2309.07864
[ "2305.08982" ]
2309.07864#260
The Rise and Potential of Large Language Model Based Agents: A Survey
Language is not all you need: Aligning perception with language models. CoRR, abs/2302.14045, 2023. [287] Li, J., D. Li, S. Savarese, et al. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J.
2309.07864#259
2309.07864#261
2309.07864
[ "2305.08982" ]
2309.07864#261
The Rise and Potential of Large Language Model Based Agents: A Survey
Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 19730–19742. PMLR, 2023. [288] Dai, W., J. Li, D. Li, et al. Instructblip: Towards general-purpose vision-language models with instruction tuning. CoRR, abs/2305.06500, 2023.
2309.07864#260
2309.07864#262
2309.07864
[ "2305.08982" ]
2309.07864#262
The Rise and Potential of Large Language Model Based Agents: A Survey
[289] Gong, T., C. Lyu, S. Zhang, et al. Multimodal-gpt: A vision and language model for dialogue with humans. CoRR, abs/2305.04790, 2023. [290] Alayrac, J., J. Donahue, P. Luc, et al. Flamingo: a visual language model for few-shot learning. In NeurIPS. 2022. [291] Su, Y., T. Lan, H. Li, et al. Pandagpt: One model to instruction-follow them all. CoRR, abs/2305.16355, 2023. [292] Liu, H., C. Li, Q. Wu, et al.
2309.07864#261
2309.07864#263
2309.07864
[ "2305.08982" ]
2309.07864#263
The Rise and Potential of Large Language Model Based Agents: A Survey
Visual instruction tuning. CoRR, abs/2304.08485, 2023. [293] Huang, R., M. Li, D. Yang, et al. Audiogpt: Understanding and generating speech, music, sound, and talking head. CoRR, abs/2304.12995, 2023. [294] Gong, Y., Y. Chung, J. R. Glass. AST: audio spectrogram transformer. In H. Hermansky, H. Cernocký, L. Burget, L. Lamel, O. Scharenborg, P. Motlíček, eds., Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021, pages 571–575. ISCA, 2021. [295] Hsu, W., B. Bolte, Y. H. Tsai, et al. Hubert:
2309.07864#262
2309.07864#264
2309.07864
[ "2305.08982" ]
2309.07864#264
The Rise and Potential of Large Language Model Based Agents: A Survey
Self-supervised speech representation learning by masked prediction of hidden units. IEEE ACM Trans. Audio Speech Lang. Process., 29:3451–3460, 2021. [296] Chen, F., M. Han, H. Zhao, et al. X-LLM: bootstrapping advanced large language models by treating multi-modalities as foreign languages. CoRR, abs/2305.04160, 2023. [297] Zhang, H., X. Li, L. Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding.
2309.07864#263
2309.07864#265
2309.07864
[ "2305.08982" ]
2309.07864#265
The Rise and Potential of Large Language Model Based Agents: A Survey
CoRR, abs/2306.02858, 2023. [298] Liu, Z., Y. He, W. Wang, et al. Interngpt: Solving vision-centric tasks by interacting with chatbots beyond language. CoRR, abs/2305.05662, 2023. [299] Hubel, D. H., T. N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex.
2309.07864#264
2309.07864#266
2309.07864
[ "2305.08982" ]
2309.07864#266
The Rise and Potential of Large Language Model Based Agents: A Survey
The Journal of physiology, 160(1):106, 1962. [300] Logothetis, N. K., D. L. Sheinberg. Visual object recognition. Annual review of neuroscience, 19(1):577–621, 1996. [301] OpenAI. Openai: Introducing chatgpt. Website, 2022. https://openai.com/blog/chatgpt. [302] Lu, J., X. Ren, Y. Ren, et al.
2309.07864#265
2309.07864#267
2309.07864
[ "2305.08982" ]
2309.07864#267
The Rise and Potential of Large Language Model Based Agents: A Survey
Improving contextual language models for response retrieval in multi-turn conversation. In J. X. Huang, Y. Chang, X. Cheng, J. Kamps, V. Murdock, J. Wen, Y. Liu, eds., Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1805–
2309.07864#266
2309.07864#268
2309.07864
[ "2305.08982" ]
2309.07864#268
The Rise and Potential of Large Language Model Based Agents: A Survey
1808. ACM, 2020. [303] Huang, L., W. Wang, J. Chen, et al. Attention on attention for image captioning. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 4633–4642. IEEE, 2019. [304] Pan, Y., T. Yao, Y. Li, et al. X-linear attention networks for image captioning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 10968–
2309.07864#267
2309.07864#269
2309.07864
[ "2305.08982" ]
2309.07864#269
The Rise and Potential of Large Language Model Based Agents: A Survey
10977. Computer Vision Foundation / IEEE, 2020. [305] Cornia, M., M. Stefanini, L. Baraldi, et al. M2: Meshed-memory transformer for image captioning. CoRR, abs/1912.08226, 2019. [306] Chen, J., H. Guo, K. Yi, et al. Visualgpt: Data-efficient image captioning by balancing visual input and linguistic knowledge from pretraining.
2309.07864#268
2309.07864#270
2309.07864
[ "2305.08982" ]
2309.07864#270
The Rise and Potential of Large Language Model Based Agents: A Survey
CoRR, abs/2102.10407, 2021. [307] Li, K., Y. He, Y. Wang, et al. Videochat: Chat-centric video understanding. CoRR, abs/2305.06355, 2023. [308] Lin, J., Y. Du, O. Watkins, et al. Learning to model the world with language. CoRR, abs/2308.01399, 2023. [309] Vaswani, A., N. Shazeer, N. Parmar, et al.
2309.07864#269
2309.07864#271
2309.07864
[ "2305.08982" ]
2309.07864#271
The Rise and Potential of Large Language Model Based Agents: A Survey
Attention is all you need. In I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, R. Garnett, eds., Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. 2017. [310] Touvron, H., M. Cord, M. Douze, et al.
2309.07864#270
2309.07864#272
2309.07864
[ "2305.08982" ]
2309.07864#272
The Rise and Potential of Large Language Model Based Agents: A Survey
Training data-efficient image transformers & distillation through attention. In M. Meila, T. Zhang, eds., Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, vol. 139 of Proceedings of Machine Learning Research, pages 10347–10357. PMLR, 2021. [311] Lu, J., C. Clark, R. Zellers, et al.
2309.07864#271
2309.07864#273
2309.07864
[ "2305.08982" ]
2309.07864#273
The Rise and Potential of Large Language Model Based Agents: A Survey
UNIFIED-IO: A unified model for vision, language, and multi-modal tasks. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [312] Peng, Z., W. Wang, L. Dong, et al. Kosmos-2: Grounding multimodal large language models to the world. CoRR, abs/2306.14824, 2023. [313] Lyu, C., M. Wu, L. Wang, et al. Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration. CoRR, abs/2306.09093, 2023. [314] Maaz, M., H. A. Rasheed, S. H. Khan, et al. Video-chatgpt: Towards detailed video under- standing via large vision and language models. CoRR, abs/2306.05424, 2023. [315] Chen, M., I. Laina, A.
2309.07864#272
2309.07864#274
2309.07864
[ "2305.08982" ]
2309.07864#274
The Rise and Potential of Large Language Model Based Agents: A Survey
Vedaldi. Training-free layout control with cross-attention guidance. CoRR, abs/2304.03373, 2023. [316] Radford, A., J. W. Kim, T. Xu, et al. Robust speech recognition via large-scale weak supervision. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 28492–28518. PMLR, 2023. [317] Ren, Y., Y. Ruan, X. Tan, et al.
2309.07864#273
2309.07864#275
2309.07864
[ "2305.08982" ]
2309.07864#275
The Rise and Potential of Large Language Model Based Agents: A Survey
Fastspeech: Fast, robust and controllable text to speech. In H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. B. Fox, R. Garnett, eds., Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3165–3174. 2019. [318] Ye, Z., Z. Zhao, Y. Ren, et al.
2309.07864#274
2309.07864#276
2309.07864
[ "2305.08982" ]
2309.07864#276
The Rise and Potential of Large Language Model Based Agents: A Survey
Syntaspeech: Syntax-aware generative adversarial text-to-speech. In L. D. Raedt, ed., Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4468–4474. ijcai.org, 2022. [319] Kim, J., J. Kong, J. Son. Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech. In M. Meila, T. Zhang, eds., Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, vol. 139 of Proceedings of Machine Learning Research, pages 5530–5540. PMLR, 2021. [320] Wang, Z., S. Cornell, S. Choi, et al.
2309.07864#275
2309.07864#277
2309.07864
[ "2305.08982" ]
2309.07864#277
The Rise and Potential of Large Language Model Based Agents: A Survey
Tf-gridnet: Integrating full- and sub-band modeling for speech separation. IEEE ACM Trans. Audio Speech Lang. Process., 31:3221–3236, 2023. [321] Liu, J., C. Li, Y. Ren, et al. Diffsinger: Singing voice synthesis via shallow diffusion mechanism. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 11020–
2309.07864#276
2309.07864#278
2309.07864
[ "2305.08982" ]
2309.07864#278
The Rise and Potential of Large Language Model Based Agents: A Survey
11028. AAAI Press, 2022. [322] Inaguma, H., S. Dalmia, B. Yan, et al. Fast-md: Fast multi-decoder end-to-end speech translation with non-autoregressive hidden intermediates. In IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2021, Cartagena, Colombia, December 13-17, 2021, pages 922–929. IEEE, 2021. [323] Flanagan, J. L. Speech analysis synthesis and perception, vol. 3. Springer Science & Business Media, 2013. [324] Schwarz, B.
2309.07864#277
2309.07864#279
2309.07864
[ "2305.08982" ]
2309.07864#279
The Rise and Potential of Large Language Model Based Agents: A Survey
Mapping the world in 3d. Nature Photonics, 4(7):429–430, 2010. [325] Parkinson, B. W., J. J. Spilker. Progress in astronautics and aeronautics: Global positioning system: Theory and applications, vol. 164. Aiaa, 1996. [326] Parisi, A., Y. Zhao, N. Fiedel. TALM: tool augmented language models. CoRR, abs/2205.12255, 2022. [327] Clarebout, G., J. Elen, N. A. J. Collazo, et al. Metacognition and the Use of Tools, pages 187–195. Springer New York, New York, NY, 2013. [328] Wu, C., S. Yin, W. Qi, et al. Visual chatgpt: Talking, drawing and editing with visual foundation models.
2309.07864#278
2309.07864#280
2309.07864
[ "2305.08982" ]
2309.07864#280
The Rise and Potential of Large Language Model Based Agents: A Survey
CoRR, abs/2303.04671, 2023. [329] Cai, T., X. Wang, T. Ma, et al. Large language models as tool makers. CoRR, abs/2305.17126, 2023. [330] Qian, C., C. Han, Y. R. Fung, et al. CREATOR: disentangling abstract and concrete reasonings of large language models through tool creation. CoRR, abs/2305.14318, 2023. [331] Chen, X., M. Lin, N. Schärli, et al. Teaching large language models to self-debug. CoRR, abs/2304.05128, 2023. [332] Liu, H., L. Lee, K. Lee, et al. Instruction-following agents with jointly pre-trained vision- language models. arXiv preprint arXiv:2210.13431, 2022. [333] Lynch, C., A. Wahid, J. Tompson, et al.
2309.07864#279
2309.07864#281
2309.07864
[ "2305.08982" ]
2309.07864#281
The Rise and Potential of Large Language Model Based Agents: A Survey
Interactive language: Talking to robots in real time. CoRR, abs/2210.06407, 2022. [334] Jin, C., W. Tan, J. Yang, et al. Alphablock: Embodied finetuning for vision-language reasoning in robot manipulation. CoRR, abs/2305.18898, 2023. [335] Shah, D., B. Osinski, B. Ichter, et al. Lm-nav: Robotic navigation with large pre-trained models of language, vision, and action. In K. Liu, D. Kulic, J. Ichnowski, eds., Conference on Robot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand, vol. 205 of Proceedings of Machine Learning Research, pages 492–504. PMLR, 2022. [336] Zhou, G., Y. Hong, Q. Wu.
2309.07864#280
2309.07864#282
2309.07864
[ "2305.08982" ]
2309.07864#282
The Rise and Potential of Large Language Model Based Agents: A Survey
Navgpt: Explicit reasoning in vision-and-language navigation with large language models. CoRR, abs/2305.16986, 2023. [337] Fan, L., G. Wang, Y. Jiang, et al. Minedojo: Building open-ended embodied agents with internet-scale knowledge. In NeurIPS. 2022. [338] Kanitscheider, I., J. Huizinga, D. Farhi, et al. Multi-task curriculum learning in a complex, visual, hard-exploration domain:
2309.07864#281
2309.07864#283
2309.07864
[ "2305.08982" ]
2309.07864#283
The Rise and Potential of Large Language Model Based Agents: A Survey
Minecraft. CoRR, abs/2106.14876, 2021. [339] Nottingham, K., P. Ammanabrolu, A. Suhr, et al. Do embodied agents dream of pixelated sheep: Embodied decision making using language guided world modelling. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 26311–26325. PMLR, 2023. [340] Sumers, T., K. Marino, A. Ahuja, et al.
2309.07864#282
2309.07864#284
2309.07864
[ "2305.08982" ]
2309.07864#284
The Rise and Potential of Large Language Model Based Agents: A Survey
Distilling internet-scale vision-language models into embodied agents. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol. 202 of Proceedings of Machine Learning Research, pages 32797–32818. PMLR, 2023. [341] Carlini, N., J. Hayes, M. Nasr, et al.
2309.07864#283
2309.07864#285
2309.07864
[ "2305.08982" ]
2309.07864#285
The Rise and Potential of Large Language Model Based Agents: A Survey
Extracting training data from diffusion models. CoRR, abs/2301.13188, 2023. [342] Savelka, J., K. D. Ashley, M. A. Gray, et al. Can GPT-4 support analysis of textual data in tasks requiring highly specialized domain expertise? In F. Lagioia, J. Mumford, D. Odekerken, H. Westermann, eds., Proceedings of the 6th Workshop on Automated Semantic Analysis of Information in Legal Text co-located with the 19th International Conference on Artificial Intelligence and Law (ICAIL 2023), Braga, Portugal, 23rd September, 2023, vol. 3441 of CEUR Workshop Proceedings, pages 1–12. CEUR-WS.org, 2023. [343] Ling, C., X. Zhao, J. Lu, et al.
2309.07864#284
2309.07864#286
2309.07864
[ "2305.08982" ]
2309.07864#286
The Rise and Potential of Large Language Model Based Agents: A Survey
Domain specialization as the key to make large language models disruptive: A comprehensive survey, 2023. [344] Linardatos, P., V. Papastefanopoulos, S. Kotsiantis. Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1):18, 2021. [345] Zou, A., Z. Wang, J. Z. Kolter, et al. Universal and transferable adversarial attacks on aligned language models. CoRR, abs/2307.15043, 2023. [346] Hussein, A., M. M. Gaber, E. Elyan, et al. Imitation learning: A survey of learning methods.
2309.07864#285
2309.07864#287
2309.07864
[ "2305.08982" ]
2309.07864#287
The Rise and Potential of Large Language Model Based Agents: A Survey
ACM Comput. Surv., 50(2):21:1–21:35, 2017. [347] Liu, Y., A. Gupta, P. Abbeel, et al. Imitation from observation: Learning to imitate behaviors from raw video via context translation. In 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, May 21-25, 2018, pages 1118–1125. IEEE, 2018. [348] Baker, B., I. Akkaya, P. Zhokov, et al.
2309.07864#286
2309.07864#288
2309.07864
[ "2305.08982" ]
2309.07864#288
The Rise and Potential of Large Language Model Based Agents: A Survey
Video pretraining (VPT): learning to act by watching unlabeled online videos. In NeurIPS. 2022. [349] Levine, S., P. Pastor, A. Krizhevsky, et al. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int. J. Robotics Res., 37(4-5):421–436, 2018. [350] Zheng, R., S. Dou, S. Gao, et al.
2309.07864#287
2309.07864#289
2309.07864
[ "2305.08982" ]
2309.07864#289
The Rise and Potential of Large Language Model Based Agents: A Survey
Secrets of RLHF in large language models part I: PPO. CoRR, abs/2307.04964, 2023. [351] Bengio, Y., J. Louradour, R. Collobert, et al. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 41–48. Association for Computing Machinery, New York, NY, USA, 2009. [352] Chen, M., J. Tworek, H. Jun, et al.
2309.07864#288
2309.07864#290
2309.07864
[ "2305.08982" ]
2309.07864#290
The Rise and Potential of Large Language Model Based Agents: A Survey
Evaluating large language models trained on code, 2021. [353] Pan, S., L. Luo, Y. Wang, et al. Unifying large language models and knowledge graphs: A roadmap. CoRR, abs/2306.08302, 2023. [354] Bran, A. M., S. Cox, A. D. White, et al. Chemcrow: Augmenting large-language models with chemistry tools, 2023. [355] Ruan, J., Y. Chen, B. Zhang, et al.
2309.07864#289
2309.07864#291
2309.07864
[ "2305.08982" ]
2309.07864#291
The Rise and Potential of Large Language Model Based Agents: A Survey
TPTU: task planning and tool usage of large language model-based AI agents. CoRR, abs/2308.03427, 2023. [356] Ogundare, O., S. Madasu, N. Wiggins. Industrial engineering with large language models: A case study of chatgpt's performance on oil & gas problems, 2023. [357] Smith, L., M. Gasser. The development of embodied cognition: Six lessons from babies. Artificial life, 11(1-2):13–
2309.07864#290
2309.07864#292
2309.07864
[ "2305.08982" ]
2309.07864#292
The Rise and Potential of Large Language Model Based Agents: A Survey
29, 2005. [358] Duan, J., S. Yu, H. L. Tan, et al. A survey of embodied AI: from simulators to research tasks. IEEE Trans. Emerg. Top. Comput. Intell., 6(2):230–244, 2022. [359] Mnih, V., K. Kavukcuoglu, D. Silver, et al. Playing atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013. [360] Silver, D., A. Huang, C. J. Maddison, et al.
2309.07864#291
2309.07864#293
2309.07864
[ "2305.08982" ]
2309.07864#293
The Rise and Potential of Large Language Model Based Agents: A Survey
Mastering the game of go with deep neural networks and tree search. Nat., 529(7587):484–489, 2016. [361] Kalashnikov, D., A. Irpan, P. Pastor, et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. CoRR, abs/1806.10293, 2018. [362] Nguyen, H., H. M. La. Review of deep reinforcement learning for robot manipulation. In 3rd IEEE International Conference on Robotic Computing, IRC 2019, Naples, Italy, February 25-27, 2019, pages 590–
2309.07864#292
2309.07864#294
2309.07864
[ "2305.08982" ]
2309.07864#294
The Rise and Potential of Large Language Model Based Agents: A Survey
595. IEEE, 2019. [363] Dasgupta, I., C. Kaeser-Chen, K. Marino, et al. Collaborating with language models for embodied reasoning. CoRR, abs/2302.00763, 2023. [364] Puig, X., K. Ra, M. Boben, et al. Virtualhome: Simulating household activities via programs. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 8494–8502. Computer Vision Foundation / IEEE Computer Society, 2018. [365] Hong, Y., Q. Wu, Y. Qi, et al.
2309.07864#293
2309.07864#295
2309.07864
[ "2305.08982" ]
2309.07864#295
The Rise and Potential of Large Language Model Based Agents: A Survey
A recurrent vision-and-language BERT for navigation. CoRR, abs/2011.13922, 2020. [366] Suglia, A., Q. Gao, J. Thomason, et al. Embodied BERT: A transformer model for embodied, language-guided visual task completion. CoRR, abs/2108.04927, 2021. [367] Ganesh, S., N. Vadori, M. Xu, et al.
2309.07864#294
2309.07864#296
2309.07864
[ "2305.08982" ]
2309.07864#296
The Rise and Potential of Large Language Model Based Agents: A Survey
Reinforcement learning for market making in a multi-agent dealer market. CoRR, abs/1911.05892, 2019. [368] Tipaldi, M., R. Iervolino, P. R. Massenio. Reinforcement learning in spacecraft control applications: Advances, prospects, and challenges. Annu. Rev. Control., 54:1–23, 2022. [369] Savva, M., J. Malik, D. Parikh, et al. Habitat: A platform for embodied AI research. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 9338–
2309.07864#295
2309.07864#297
2309.07864
[ "2305.08982" ]
2309.07864#297
The Rise and Potential of Large Language Model Based Agents: A Survey
9346. IEEE, 2019. [370] Longpre, S., L. Hou, T. Vu, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023. [371] Wang, Y., Y. Kordi, S. Mishra, et al. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022. [372] Liang, J., W. Huang, F. Xia, et al.
2309.07864#296
2309.07864#298
2309.07864
[ "2305.08982" ]
2309.07864#298
The Rise and Potential of Large Language Model Based Agents: A Survey
Code as policies: Language model programs for embodied control. In IEEE International Conference on Robotics and Automation, ICRA 2023, London, UK, May 29 - June 2, 2023, pages 9493–9500. IEEE, 2023. [373] Li, C., F. Xia, R. Martín-Martín, et al. HRL4IN: hierarchical reinforcement learning for interactive navigation with mobile manipulators. In L. P. Kaelbling, D. Kragic, K. Sugiura, eds., 3rd Annual Conference on Robot Learning, CoRL 2019, Osaka, Japan, October 30 - November 1, 2019, Proceedings, vol. 100 of Proceedings of Machine Learning Research, pages 603–
2309.07864#297
2309.07864#299
2309.07864
[ "2305.08982" ]
2309.07864#299
The Rise and Potential of Large Language Model Based Agents: A Survey
616. PMLR, 2019. [374] Eppe, M., C. Gumbsch, M. Kerzel, et al. Hierarchical principles of embodied reinforcement learning: A review. CoRR, abs/2012.10147, 2020. [375] Paul, S., A. Roy-Chowdhury, A. Cherian. AVLEN: audio-visual-language embodied navigation in 3d environments. In NeurIPS. 2022.
2309.07864#298
2309.07864#300
2309.07864
[ "2305.08982" ]
2309.07864#300
The Rise and Potential of Large Language Model Based Agents: A Survey
[376] Hu, B., C. Zhao, P. Zhang, et al. Enabling intelligent interactions between an agent and an LLM: A reinforcement learning approach. CoRR, abs/2306.03604, 2023. [377] Chen, C., U. Jain, C. Schissler, et al. Soundspaces: Audio-visual navigation in 3d environments. In A. Vedaldi, H. Bischof, T. Brox, J.
2309.07864#299
2309.07864#301
2309.07864
[ "2305.08982" ]
2309.07864#301
The Rise and Potential of Large Language Model Based Agents: A Survey
Frahm, eds., Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VI, vol. 12351 of Lecture Notes in Computer Science, pages 17–36. Springer, 2020. [378] Huang, R., Y. Ren, J. Liu, et al. Generspeech: Towards style transfer for generalizable out-of-domain text-to-speech. In NeurIPS. 2022.
2309.07864#300
2309.07864#302
2309.07864
[ "2305.08982" ]
2309.07864#302
The Rise and Potential of Large Language Model Based Agents: A Survey
[379] Shah, D., B. Eysenbach, G. Kahn, et al. Ving: Learning open-world navigation with visual goals. In IEEE International Conference on Robotics and Automation, ICRA 2021, Xi'an, China, May 30 - June 5, 2021, pages 13215–13222. IEEE, 2021. [380] Huang, C., O. Mees, A. Zeng, et al.
2309.07864#301
2309.07864#303
2309.07864
[ "2305.08982" ]
2309.07864#303
The Rise and Potential of Large Language Model Based Agents: A Survey
Visual language maps for robot navigation. In IEEE International Conference on Robotics and Automation, ICRA 2023, London, UK, May 29 - June 2, 2023, pages 10608–10615. IEEE, 2023. [381] Georgakis, G., K. Schmeckpeper, K. Wanchoo, et al. Cross-modal map learning for vision and language navigation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 15439–
2309.07864#302
2309.07864#304
2309.07864
[ "2305.08982" ]
2309.07864#304
The Rise and Potential of Large Language Model Based Agents: A Survey
15449. IEEE, 2022. [382] Dorbala, V. S., J. F. M. Jr., D. Manocha. Can an embodied agent find your "cat-shaped mug"? llm-based zero-shot object navigation. CoRR, abs/2303.03480, 2023. [383] Li, L. H., P. Zhang, H. Zhang, et al. Grounded language-image pre-training. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10955–
2309.07864#303
2309.07864#305
2309.07864
[ "2305.08982" ]
2309.07864#305
The Rise and Potential of Large Language Model Based Agents: A Survey
10965. IEEE, 2022. [384] Gan, C., Y. Zhang, J. Wu, et al. Look, listen, and act: Towards audio-visual embodied navigation. In 2020 IEEE International Conference on Robotics and Automation, ICRA 2020, Paris, France, May 31 - August 31, 2020, pages 9701–9707. IEEE, 2020. [385] Brohan, A., N. Brown, J. Carbajal, et al.
2309.07864#304
2309.07864#306
2309.07864
[ "2305.08982" ]
2309.07864#306
The Rise and Potential of Large Language Model Based Agents: A Survey
RT-1: robotics transformer for real-world control at scale. CoRR, abs/2212.06817, 2022. [386] Brohan, A., N. Brown, J. Carbajal, et al. RT-2: vision-language-action models transfer web knowledge to robotic control. CoRR, abs/2307.15818, 2023. [387] PrismarineJS, 2013. [388] Gur, I., H. Furuta, A. Huang, et al. A real-world webagent with planning, long context understanding, and program synthesis. CoRR, abs/2307.12856, 2023. [389] Deng, X., Y. Gu, B. Zheng, et al. Mind2web: Towards a generalist agent for the web.
2309.07864#305
2309.07864#307
2309.07864
[ "2305.08982" ]
2309.07864#307
The Rise and Potential of Large Language Model Based Agents: A Survey
CoRR, abs/2306.06070, 2023. [390] Furuta, H., O. Nachum, K. Lee, et al. Multimodal web navigation with instruction-finetuned foundation models. CoRR, abs/2305.11854, 2023. [391] Zhou, S., F. F. Xu, H. Zhu, et al. Webarena: A realistic web environment for building autonomous agents. CoRR, abs/2307.13854, 2023. [392] Yao, S., H. Chen, J. Yang, et al.
2309.07864#306
2309.07864#308
2309.07864
[ "2305.08982" ]
2309.07864#308
The Rise and Potential of Large Language Model Based Agents: A Survey
Webshop: Towards scalable real-world web interaction with grounded language agents. In NeurIPS. 2022. [393] Kim, G., P. Baldi, S. McAleer. Language models can solve computer tasks. CoRR, abs/2303.17491, 2023. [394] Zheng, L., R. Wang, B. An. Synapse: Leveraging few-shot exemplars for human-level computer control. CoRR, abs/2306.07863, 2023. [395] Chen, P., C. Chang. Interact: Exploring the potentials of chatgpt as a cooperative agent. CoRR, abs/2308.01552, 2023. [396] Gramopadhye, M., D.
2309.07864#307
2309.07864#309
2309.07864
[ "2305.08982" ]
2309.07864#309
The Rise and Potential of Large Language Model Based Agents: A Survey
Szafir. Generating executable action plans with environmentally-aware language models. CoRR, abs/2210.04964, 2022. [397] Li, H., Y. Hao, Y. Zhai, et al. The hitchhiker's guide to program analysis: A journey with large language models. CoRR, abs/2308.00245, 2023. [398] Feldt, R., S. Kang, J. Yoon, et al.
2309.07864#308
2309.07864#310
2309.07864
[ "2305.08982" ]
2309.07864#310
The Rise and Potential of Large Language Model Based Agents: A Survey
Towards autonomous testing agents via conversational large language models. CoRR, abs/2306.05152, 2023. [399] Kang, Y., J. Kim. Chatmof: An autonomous AI system for predicting and generating metal-organic frameworks. CoRR, abs/2308.01423, 2023. [400] Wang, R., P. A. Jansen, M. Côté, et al. Scienceworld: Is your agent smarter than a 5th grader? In Y. Goldberg, Z. Kozareva, Y. Zhang, eds., Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 11279–11298. Association for Computational Linguistics, 2022. [401] Yuan, H., C. Zhang, H. Wang, et al.
2309.07864#309
2309.07864#311
2309.07864
[ "2305.08982" ]
2309.07864#311
The Rise and Potential of Large Language Model Based Agents: A Survey
Plan4mc: Skill reinforcement learning and planning for open-world minecraft tasks. CoRR, abs/2303.16563, 2023. [402] Hao, R., L. Hu, W. Qi, et al. Chatllm network: More brains, more intelligence. CoRR, abs/2304.12998, 2023. [403] Mandi, Z., S. Jain, S. Song. Roco: Dialectic multi-robot collaboration with large language models. CoRR, abs/2307.04738, 2023.
2309.07864#310
2309.07864#312
2309.07864
[ "2305.08982" ]
2309.07864#312
The Rise and Potential of Large Language Model Based Agents: A Survey
[404] Hamilton, S. Blind judgement: Agent-based supreme court modelling with GPT. CoRR, abs/2301.05327, 2023. [405] Hong, S., X. Zheng, J. Chen, et al. Metagpt: Meta programming for multi-agent collaborative framework. CoRR, abs/2308.00352, 2023. [406] Wu, Q., G. Bansal, J. Zhang, et al. Autogen: Enabling next-gen LLM applications via multi-agent conversation framework. CoRR, abs/2308.08155, 2023. [407] Zhang, C., K. Yang, S. Hu, et al. Proagent: Building proactive cooperative AI with large language models. CoRR, abs/2308.11339, 2023. [408] Nair, V., E. Schumacher, G. J. Tso, et al.
2309.07864#311
2309.07864#313
2309.07864
[ "2305.08982" ]
2309.07864#313
The Rise and Potential of Large Language Model Based Agents: A Survey
DERA: enhancing large language model completions with dialog-enabled resolving agents. CoRR, abs/2303.17071, 2023. [409] Talebirad, Y., A. Nadiri. Multi-agent collaboration: Harnessing the power of intelligent LLM agents. CoRR, abs/2306.03314, 2023. [410] Chen, W., Y. Su, J. Zuo, et al. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. CoRR, abs/2308.10848, 2023. [411] Shi, J., J. Zhao, Y. Wang, et al.
2309.07864#312
2309.07864#314
2309.07864
[ "2305.08982" ]
2309.07864#314
The Rise and Potential of Large Language Model Based Agents: A Survey
CGMI: configurable general multi-agent interaction framework. CoRR, abs/2308.12503, 2023. [412] Xiong, K., X. Ding, Y. Cao, et al. Examining the inter-consistency of large language models: An in-depth analysis via debate. CoRR, abs/2305.11595, 2023. [413] Kalvakurthi, V., A. S. Varde, J. Jenq.
2309.07864#313
2309.07864#315
2309.07864
[ "2305.08982" ]
2309.07864#315
The Rise and Potential of Large Language Model Based Agents: A Survey
Hey dona! can you help me with student course registration? CoRR, abs/2303.13548, 2023. [414] Swan, M., T. Kido, E. Roland, et al. Math agents: Computational infrastructure, mathematical embedding, and genomics. CoRR, abs/2307.02502, 2023. [415] Hsu, S.-L., R. S. Shah, P. Senthil, et al. Helping the helper: Supporting peer counselors via ai-empowered practice and feedback. arXiv preprint arXiv:2305.08982, 2023. [416] Zhang, H., J. Chen, F. Jiang, et al. Huatuogpt, towards taming language model to be a doctor. CoRR, abs/2305.15075, 2023. [417] Yang, S., H. Zhao, S. Zhu, et al. Zhongjing: Enhancing the chinese medical capabilities of large language model through expert feedback and real-world multi-turn dialogue.
2309.07864#314
2309.07864#316
2309.07864
[ "2305.08982" ]
2309.07864#316
The Rise and Potential of Large Language Model Based Agents: A Survey
CoRR, abs/2308.03549, 2023. [418] Ali, M. R., S. Z. Razavi, R. Langevin, et al. A virtual conversational agent for teens with autism spectrum disorder: Experimental results and design lessons. In S. Marsella, R. Jack, H. H. Vilhjálmsson, P. Sequeira, E. S. Cross, eds., IVA '20: ACM International Conference on Intelligent Virtual Agents, Virtual Event, Scotland, UK, October 20-22, 2020, pages 2:1–2:8. ACM, 2020.
2309.07864#315
2309.07864#317
2309.07864
[ "2305.08982" ]
2309.07864#317
The Rise and Potential of Large Language Model Based Agents: A Survey
[419] Gao, W., X. Gao, Y. Tang. Multi-turn dialogue agent as sales' assistant in telemarketing. In International Joint Conference on Neural Networks, IJCNN 2023, Gold Coast, Australia, June 18-23, 2023, pages 1–9. IEEE, 2023. [420] Schick, T., J. A. Yu, Z. Jiang, et al. PEER: A collaborative language model. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [421] Lu, B., N. Haduong, C. Lee, et al. DIALGEN: collaborative human-lm generated dialogues for improved understanding of human-human conversations. CoRR, abs/2307.07047, 2023. [422] Gao, D., L. Ji, L. Zhou, et al. Assistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn.
2309.07864#316
2309.07864#318
2309.07864
[ "2305.08982" ]