Cognitive Architectures for Language Agents
Action space: thinking beyond external tools or actions. Although "action space" is a standard term in reinforcement learning, it has been used sparingly with language agents. CoALA argues for defining a clear and task-suitable action space with both internal (reasoning, retrieval, learning) and external (grounding) actions, which will help systematize and inform agent design.

• Size of the action space. More capable agents (e.g., Voyager, Generative Agents) have larger action spaces, which in turn means they face a more complex decision-making problem. As a result, these agents rely on more customized or hand-crafted decision procedures. The tradeoff between action space size and decision-making complexity is a basic problem to consider before agent development, and taking the minimal action space necessary to solve a given task may be preferable.
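To make the internal/external split concrete, here is a minimal sketch (our illustration, not code from any cited system; all names are hypothetical) of an explicitly typed action space:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ActionKind(Enum):
    """CoALA's four action types: three internal, one external."""
    REASONING = auto()   # internal: update working memory via LLM inference
    RETRIEVAL = auto()   # internal: read from long-term memory
    LEARNING = auto()    # internal: write to long-term memory
    GROUNDING = auto()   # external: act on the outside environment

@dataclass(frozen=True)
class Action:
    kind: ActionKind
    name: str

# A deliberately minimal, task-suitable action space: every action added
# here enlarges the decision problem the agent must solve at each step.
ACTION_SPACE = (
    Action(ActionKind.REASONING, "reflect"),
    Action(ActionKind.RETRIEVAL, "search_memory"),
    Action(ActionKind.GROUNDING, "run_command"),
)
```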
• Safety of the action space. Some parts of the action space are inherently riskier. "Learning" actions (especially procedural deletion and modification) could cause internal harm, while "grounding" actions (e.g., "rm" in a bash terminal, harmful speech in human dialog, holding a knife in physical environments) could cause external harm. Today, safety measures are typically task-specific heuristics, e.g., removing "os" operations in Python (Chen et al., 2021), filtering keywords in dialog (Chowdhery et al., 2022; Driess et al., 2023), or limiting robots to controlled environments (Ahn et al., 2022). However, as agents are grounded in more complex environments with richer internal mechanisms, it may be necessary to specify and ablate the agent's action space for worst-case scenario prediction and prevention (Yao and Narasimhan, 2023).

Decision making: thinking beyond action generation. We believe one of the most exciting future directions for language agents is decision-making: as detailed in Section 4.6, most works are still confined to proposing (or directly generating) a single action. Present agents have just scratched the surface of more deliberate, propose-evaluate-select decision-making procedures, as sketched below.
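A minimal sketch of one such propose-evaluate-select cycle (our illustration; `llm` is a hypothetical prompt-to-text callable, and the 0-10 rating prompt is an assumption, not a method from the paper):

```python
def decide(llm, observation: str, k: int = 3) -> str:
    """One deliberate decision cycle: propose k candidate actions,
    evaluate each with a separate LLM call, then select the best.
    Most current agents stop after the first proposal."""
    proposals = [llm(f"Propose one action for: {observation}") for _ in range(k)]
    scores = []
    for p in proposals:
        rating = llm(f"On a 0-10 scale, how promising is action {p!r} "
                     f"for: {observation}? Answer with a single number.")
        scores.append(float(rating))
    best = max(range(k), key=lambda i: scores[i])  # select stage
    return proposals[best]
```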
• Mixing language-based reasoning and code-based planning may offer the best of both worlds. Existing approaches either plan directly in natural language (Huang et al., 2022c; Ahn et al., 2022) or use LLMs to translate from natural language to structured world models (Wong et al., 2023; Liu et al., 2023a; Zhang et al., 2023a; Li et al., 2023a; Guan et al., 2023; Silver et al., 2022; 2023). Future work could integrate these: just as Soar incorporates a simulator for physical reasoning (Laird, 2022), agents may write and execute simulation code on the fly to evaluate the consequences of plans, as in the sketch below. See Section 7 for more discussion.
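One hedged sketch of this hybrid (ours, not from the cited works; the `llm` callable is hypothetical, and a real system would sandbox the execution):

```python
import contextlib
import io

def evaluate_plan_by_simulation(llm, plan: str, world_state: str) -> float:
    """The LLM drafts simulation code for a candidate plan; the agent
    runs it and reads off a numeric score for the plan's consequences."""
    code = llm(
        f"Write Python that simulates executing the plan {plan!r} "
        f"from the state {world_state!r} and prints a numeric score."
    )
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code)  # UNSAFE outside a sandbox; shown for illustration only
    return float(buffer.getvalue().strip())
```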
• Extending deliberative reasoning to real-world settings. Initial works have implemented classical planning and tree search (Yao et al., 2023; Hao et al., 2023; Liu et al., 2023a; Dagan et al., 2023), using toy tasks such as the game of 24 or block building. Extending these schemes to more complicated tasks with grounding (Qin et al., 2023) and long-term memory is an exciting direction.

• Metareasoning to improve efficiency. LLM calls are both slow and computationally intensive. Using LLMs for decision-making entails a balance between their computational cost and the utility of the resulting improved plan. Most LLM reasoning methods fix a search budget by specifying a depth of reasoning (Yao et al., 2023), but humans appear to adaptively allocate computation (Russek et al., 2022; Lieder and Griffiths, 2020; Callaway et al., 2022; Gershman et al., 2015). Future work should develop mechanisms to estimate the utility of planning (Laidlaw et al., 2023) and modify the decision procedure accordingly (see the sketch after this list), either via amortization (fine-tuning the LLM based on the results of previous actions, e.g., Nguyen, 2023; Hamrick et al., 2019), routing among several decision sub-procedures (e.g., ReAct (Yao et al., 2022b) investigated backing off to CoT (Wei et al., 2022b) and vice versa), or updates to the decision-making procedure.
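As one hedged illustration of adaptive allocation (ours, not from the cited works; the confidence-rating prompt and `llm` interface are assumptions), deliberation could deepen only while the current best plan still looks unpromising:

```python
def plan_adaptively(llm, state: str, max_depth: int = 4,
                    good_enough: float = 0.9) -> str:
    """Metareasoning sketch: rather than fixing the depth of reasoning,
    keep paying for deeper plans only while estimated utility stays low."""
    best_plan, best_conf = "", 0.0
    for depth in range(1, max_depth + 1):
        plan = llm(f"Plan {depth} step(s) ahead for: {state}")
        conf = float(llm(f"On a 0-1 scale, how likely is the plan {plan!r} "
                         f"to solve: {state}? Answer with a single number."))
        if conf > best_conf:
            best_plan, best_conf = plan, conf
        if best_conf >= good_enough:
            break  # further LLM calls are unlikely to be worth their cost
    return best_plan
```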
• Calibration and alignment. More complex decision-making is currently bottlenecked by issues such as over-confidence and miscalibration (Jiang et al., 2021; Braverman et al., 2020; Chen et al., 2022), misalignment with human values or bias (Liang et al., 2021; Feng et al., 2023), hallucinations in self-evaluation (Shinn et al., 2023), and lack of human-in-the-loop mechanisms in the face of uncertainties (Nguyen et al., 2022a; Ren et al., 2023). Solving these issues will significantly improve LLMs'
utilities as agent backbones.

# 7 Discussion

Internal vs. external: what is the boundary between an agent and its environment? While humans or robots are clearly distinct from their embodied environment, digital language agents have less clear boundaries. For example, is a Wikipedia database an internal semantic memory or an external digital environment (Yao et al., 2022b)? If an agent iteratively executes and improves code before submitting an answer (Shinn et al., 2023; Yang et al., 2023), is the code execution internal or external? If a method consists of proposal and evaluation prompts (Yao et al., 2023), should it be considered a single agent or two collaborating simpler agents (a proposer and an evaluator)?

We suggest the boundary question can be answered in terms of controllability and coupling. For example, Wikipedia is not controllable: it is an external environment that may be unexpectedly modified by other users. However, an offline version that only the agent may write to is controllable, and thus can be considered an internal memory. Similarly, code execution in an internal virtual environment should be considered an internal reasoning action, whereas code execution on an external machine (which may possess security vulnerabilities) should be considered an external grounding action. Lastly, if aspects of the agent, such as proposal and evaluation prompts, are designed for and dependent on each other, then they are tightly coupled and best conceptualized as components of an individual agent. In contrast, if the steps are independently useful, a multi-agent perspective may be more appropriate. While these dilemmas are primarily conceptual, such understanding can support systematic agent design and help the field align on shared terminology. Practitioners may also simply choose their preferred framing, as long as it is consistent and useful for their own work.

Physical vs. digital: what differences beget attention? While animals only live once in the physical world, digital environments (e.g., the Internet) often allow sequential (via resets) and parallel trials. This means digital agents can explore more boldly (e.g., open a million webpages) and self-clone for parallel task solving (e.g., a million web agents try different web paths), which may result in decision-making procedures different from current ones inspired by human cognition (Griffiths, 2020).

Learning vs. acting: how should agents continuously and autonomously learn?
In the CoALA framework, learning is a result of a decision-making cycle just like grounding: the agent deliberately chooses to commit information to long-term memory. This is in contrast to most agents, which simply fix a learning schedule and only use decision-making for external actions. Biological agents, however, do not have this luxury: they must balance learning against external actions in their lifetime, choosing when and what to learn (Mattar and Daw, 2018). More flexible language agents (Wang et al., 2023a; Park et al., 2023) would follow a similar design and treat learning on par with external actions. Learning could be proposed as a possible action during regular decision-making, allowing the agent to "defer" it until the appropriate time, as in the sketch below.
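A minimal sketch of such a cycle (our illustration; the interfaces and the value-rating prompt are assumptions, not a CoALA API):

```python
def decision_cycle(llm, state: str, memory: list):
    """One decision cycle in which learning competes with external action:
    the agent may commit information to long-term memory instead of acting."""
    candidates = [
        ("grounding", llm(f"Propose one external action for: {state}")),
        ("learning", f"store an episode summary of: {state}"),
    ]
    scored = []
    for kind, action in candidates:
        value = float(llm(f"On a 0-10 scale, how valuable is it to "
                          f"{action} right now? Answer with a single number."))
        scored.append((value, kind, action))
    _, kind, action = max(scored)
    if kind == "learning":
        memory.append(action)  # commit to long-term memory; defer acting
        return None            # no external action this cycle
    return action              # execute in the external environment
```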
GPT-4 vs. GPT-N: how would agent design change with more powerful LLMs? Agent design is a moving target as new LLM capabilities emerge with scale (Wei et al., 2022a). For example, earlier language models such as GPT-2 (Radford et al., 2019) would not support LLM agents; indeed, work at that time needed to combine GPT-2 with reinforcement learning for action generation (Yao et al., 2020). GPT-3 (Brown et al., 2020) unlocked flexible few-shot and zero-shot reasoning for NLP tasks, while only GPT-4 (OpenAI, 2023a) starts to afford more reliable self-evaluation (Saunders et al., 2022; Shinn et al., 2023; Yao et al., 2023) and self-refinement (Madaan et al., 2023; Chen et al., 2023b). Will future LLMs further reduce the need for coded rules and extra learned models? Will this necessitate changes to the CoALA framework?

As a thought experiment, imagine GPT-N could "simulate" memory, grounding, learning, and decision-making in context: list all the possible actions, simulate and evaluate each one, and maintain its entire long-term memory explicitly in a very long context. Or even more boldly: perhaps GPT-N+1 succeeds at generating the next action by simulating these implicitly in neurons, without any intermediate reasoning in context. While these extreme cases seem unlikely in the immediate future, incremental improvements may alter the importance of different CoALA components. For example, a longer context window could reduce the importance of long-term memory, while more powerful reasoning for internal evaluation and simulation could allow longer-horizon planning. In general, LLMs are not subject to biological limitations (Griffiths, 2020), and their emergent properties have been difficult to predict. Nonetheless, CoALA, and cognitive science more generally, may still help systematically organize tasks where language agents succeed or fail, and suggest code-based procedures to complement a given LLM on a given task.
Even in the most extreme case, where GPT implements all of CoALA's mechanisms in neurons, it may be helpful to leverage CoALA as a conceptual guide to discover and interpret those implicit circuits. Of course, as discussed in Section 6, agent use cases will also help discover, define, and shape LLM capabilities. Similar to how chips and computer architectures have co-evolved, language model and agent design should also develop along a reciprocal path forward.

# 8 Conclusion

We proposed Cognitive Architectures for Language Agents (CoALA), a conceptual framework to systematically understand and build language agents. Our framework draws inspiration from the rich history of symbolic artificial intelligence and cognitive science, connecting decades-old insights to frontier research on large language models. We believe this approach provides a path towards developing more general and more human-like artificial intelligence.
# Acknowledgements

We thank Harrison Chase, Baian Chen, Khanh Nguyen, Ofir Press, Noah Shinn, and Jens Tuyls for proofreading and valuable feedback, and other members of the Princeton NLP Group and Princeton Computational Cognitive Science Lab for helpful discussions. SY and KN acknowledge support from an Oracle Collaborative Research award and the National Science Foundation under Grant No. 2239363. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. SY is also supported by the Harold W. Dodds Fellowship from Princeton. TS is supported by the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program.
# References

S. Adams, I. Arel, J. Bach, R. Coop, R. Furlan, B. Goertzel, J. S. Hall, A. Samsonovich, M. Scheutz, M. Schlesinger, et al. Mapping the landscape of human-level artificial general intelligence. AI Magazine, 33(1):25–42, 2012.
M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Mensch, K. Millican, M. Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022.
J. R. Anderson and C. Lebiere. The Newell test for a theory of cognition. Behavioral and Brain Sciences, 26(5):587–601, 2003.
J. Andreas. Language models as agent models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5769–5779, 2022.
R. C. Atkinson and R. M. Shiffrin. Human memory: A proposed system and its control processes. In Psychology of Learning and Motivation, volume 2, pages 89–195. Elsevier, 1968.
A. D. Baddeley and G. Hitch. Working memory. In Psychology of Learning and Motivation, volume 8, pages 47–89. Elsevier, 1974.
Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022.
Y. Bisk, D. Marcu, and W. Wong. Towards a dataset for human computer communication via grounded language acquisition. In Workshops at the Thirtieth AAAI Conference on Artificial Intelligence, 2016.
E. Biyik and M. Palan. Asking easy questions: A user-friendly approach to active reward learning. In Proceedings of the 3rd Conference on Robot Learning, 2019.
C. Blundell, B. Uria, A. Pritzel, Y. Li, A. Ruderman, J. Z. Leibo, J. Rae, D. Wierstra, and D. Hassabis. Model-free episodic control. arXiv preprint arXiv:1606.04460, 2016.
S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driessche, J.-B. Lespiau, B. Damoc, A. Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206–2240, 2022.
S. Branavan, D. Silver, and R. Barzilay. Learning to win by reading manuals in a Monte-Carlo framework. Journal of Artificial Intelligence Research, 43:661–704, 2012.
M. Braverman, X. Chen, S. Kakade, K. Narasimhan, C. Zhang, and Y. Zhang. Calibration, entropy rates, and memory in language models. In International Conference on Machine Learning, pages 1089–1099, 2020.
G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym, 2016.
A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. RT-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, X. Chen, K. Choromanski, T. Ding, D. Driess, A. Dubey, C. Finn, et al. RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.
F. Callaway, B. van Opheusden, S. Gul, P. Das, P. M. Krueger, T. L. Griffiths, and F. Lieder. Rational use of cognitive resources in human planning. Nature Human Behaviour, 6(8):1112–1125, 2022.
C.-M. Chan, W. Chen, Y. Su, J. Yu, W. Xue, S. Zhang, J. Fu, and Z. Liu. ChatEval: Towards better LLM-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201, 2023.
B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kappler. Open-vocabulary queryable scene representations for real world planning. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11509–11522, 2023a.
D. Chen and R. Mooney. Learning to interpret natural language navigation instructions from observations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 25, pages 859–865, 2011.
D. Chen, A. Fisch, J. Weston, and A. Bordes. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051, 2017.
M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
X. Chen, M. Lin, N. Schärli, and D. Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023b.
Y. Chen, L. Yuan, G. Cui, Z. Liu, and H. Ji. A close look into the calibration of pre-trained language models. arXiv preprint arXiv:2211.00151, 2022.
N. Chomsky. Three models for the description of language. IRE Transactions on Information Theory, 2(3):113–124, 1956.
A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 2017.
A. Church. A set of postulates for the foundation of logic. Annals of Mathematics, pages 346–366, 1932.
M.-A. Côté, A. Kádár, X. Yuan, B. Kybartas, T. Barnes, E. Fine, J. Moore, M. Hausknecht, L. El Asri, M. Adada, et al. TextWorld: A learning environment for text-based games. In Computer Games: 7th Workshop, CGW 2018, pages 41–75. Springer, 2019.
A. Creswell, M. Shanahan, and I. Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning. In The Eleventh International Conference on Learning Representations, 2023.
G. Dagan, F. Keller, and A. Lascarides. Dynamic planning with a LLM. arXiv preprint arXiv:2308.06391, 2023.
I. Dasgupta, C. Kaeser-Chen, K. Marino, A. Ahuja, S. Babayan, F. Hill, and R. Fergus. Collaborating with language models for embodied reasoning. In Second Workshop on Language and Reinforcement Learning, 2022.
X. Deng, Y. Gu, B. Zheng, S. Chen, S. Stevens, B. Wang, H. Sun, and Y. Su. Mind2Web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023.
N. Derbinsky, J. Li, and J. Laird. A multi-domain evaluation of scaling in a general episodic memory. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 26, pages 193–199, 2012.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), 2019.
D. Dohan, W. Xu, A. Lewkowycz, J. Austin, D. Bieber, R. G. Lopes, Y. Wu, H. Michalewski, R. A. Saurous, J. Sohl-Dickstein, et al. Language model cascades. arXiv preprint arXiv:2207.10342, 2022.
D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al. PaLM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023.
Y. Du, S. Li, A. Torralba, J. B. Tenenbaum, and I. Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023.
A. Ecoffet, J. Huizinga, J. Lehman, K. O. Stanley, and J. Clune. Go-Explore: a new approach for hard-exploration problems. arXiv preprint arXiv:1901.10995, 2019.
K. Ellis, C. Wong, M. Nye, M. Sablé-Meyer, L. Morales, L. Hewitt, L. Cary, A. Solar-Lezama, and J. B. Tenenbaum. DreamCoder: Bootstrapping inductive program synthesis with wake-sleep library learning. In Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation, pages 835–850, 2021.
S. Feng, C. Y. Park, Y. Liu, and Y. Tsvetkov. From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair NLP models. arXiv preprint arXiv:2305.08283, 2023.
D. Ganguli, A. Askell, N. Schiefer, T. Liao, K. Lukošiūtė, A. Chen, A. Goldie, A. Mirhoseini, C. Olsson, D. Hernandez, et al. The capacity for moral self-correction in large language models. arXiv preprint arXiv:2302.07459, 2023.
C. Gao, X. Lan, Z. Lu, J. Mao, J. Piao, H. Wang, D. Jin, and Y. Li. S3: Social-network simulation system with large language model-empowered agents. arXiv preprint arXiv:2307.14984, 2023.
T. Gao, A. Fisch, and D. Chen. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723, 2020.
S. J. Gershman, E. J. Horvitz, and J. B. Tenenbaum. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245):273–278, 2015.
T. L. Griffiths. Understanding human intelligence through human limitations. Trends in Cognitive Sciences, 24(11):873–883, 2020.
J. Gu, Y. Wang, K. Cho, and V. O. Li. Search engine guided neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
L. Guan, K. Valmeekam, S. Sreedharan, and S. Kambhampati. Leveraging pre-trained large language models to construct and utilize world models for model-based task planning. arXiv preprint arXiv:2305.14909, 2023.
Guidance. Guidance, 2023. URL https://github.com/guidance-ai/guidance.
I. Gur, H. Furuta, A. Huang, M. Safdari, Y. Matsuo, D. Eck, and A. Faust. A real-world webagent with planning, long context understanding, and program synthesis. arXiv preprint arXiv:2307.12856, 2023.
K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929–3938, 2020.
J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, T. Pfaff, T. Weber, L. Buesing, and P. W. Battaglia. Combining Q-learning and search with amortized value estimates. In International Conference on Learning Representations, 2019.
A. W. Hanjie, V. Zhong, and K. Narasimhan. Grounding language to entities and dynamics for generalization in reinforcement learning. In International Conference on Machine Learning (ICML), 2021.
S. Hao, Y. Gu, H. Ma, J. J. Hong, Z. Wang, D. Z. Wang, and Z. Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023.
M. Hasan, C. Ozel, S. Potter, and E. Hoque. Sapien: Affective virtual agents powered by large language models. arXiv preprint arXiv:2308.03022, 2023.
P. Haslum, N. Lipovetzky, D. Magazzeni, C. Muise, R. Brachman, F. Rossi, and P. Stone. An Introduction to the Planning Domain Definition Language, volume 13. Springer, 2019.
M. Hausknecht, P. Ammanabrolu, M.-A. Côté, and X. Yuan. Interactive fiction games: A colossal adventure. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7903–7910, 2020.
S. Hong, X. Zheng, J. Chen, Y. Cheng, C. Zhang, Z. Wang, S. K. S. Yau, Z. Lin, L. Zhou, C. Ran, et al. MetaGPT: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023.
J. Huang, S. S. Gu, L. Hou, Y. Wu, X. Wang, H. Yu, and J. Han. Large language models can self-improve. arXiv preprint arXiv:2210.11610, 2022a.
S. Huang, Z. Jiang, H. Dong, Y. Qiao, P. Gao, and H. Li. Instruct2Act: Mapping multi-modality instructions to robotic actions with large language model. arXiv preprint arXiv:2305.11176, 2023.
W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147, 2022b.
W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022c.
A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):1–35, 2017.
G. Irving, P. Christiano, and D. Amodei. AI safety via debate. arXiv preprint arXiv:1805.00899, 2018.
G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave. Unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021.
Z. Jiang, J. Araki, H. Ding, and G. Neubig. How can we know when language models know? On the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962–977, 2021.
Z. Jin, S. Levine, F. G. Adauto, O. Kamal, M. Sap, M. Sachan, R. Mihalcea, J. B. Tenenbaum, and B. Schölkopf. When to make exceptions: Exploring language models as accounts of human moral judgment. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems, 2022.
S. Jinxin, Z. Jiabao, W. Yilei, W. Xingjiao, L. Jiawen, and H. Liang. CGMI: Configurable general multi-agent interaction framework. arXiv preprint arXiv:2308.12503, 2023.
R. M. Jones, J. E. Laird, P. E. Nielsen, K. J. Coulter, P. Kenny, and F. V. Koss. Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1):27–27, 1999.
D. Jurafsky. Speech & Language Processing. Pearson Education India, 2000.
O. Khattab, K. Santhanam, X. L. Li, D. Hall, P. Liang, C. Potts, and M. Zaharia. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. arXiv preprint arXiv:2212.14024, 2022. URL https://github.com/stanfordnlp/dspy.
G. Kim, P. Baldi, and S. McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023.
J. R. Kirk and J. E. Laird. Interactive task learning for simple games. Advances in Cognitive Systems, 3(13-30):5, 2014.
J. R. Kirk, W. Robert, P. Lindes, and J. E. Laird. Improving knowledge extraction from LLMs for robotic task learning through agent analysis. arXiv preprint arXiv:2306.06770, 2023.
K. R. Koedinger, J. R. Anderson, W. H. Hadley, M. A. Mark, et al. Intelligent tutoring goes to school in the big city. International Journal of Artificial Intelligence in Education, 8(1):30–43, 1997.
T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213, 2022.
I. Kotseruba and J. K. Tsotsos. 40 years of cognitive architectures: core cognitive abilities and practical applications. Artificial Intelligence Review, 53(1):17–94, 2020.
C. Laidlaw, S. Russell, and A. Dragan. Bridging RL theory and practice with the effective horizon. arXiv preprint arXiv:2304.09853, 2023.
J. E. Laird. The Soar Cognitive Architecture. MIT Press, 2019.
J. E. Laird. Introduction to Soar. arXiv preprint arXiv:2205.03854, 2022.
J. E. Laird, P. S. Rosenbloom, and A. Newell. Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1:11–46, 1986.
J. E. Laird, A. Newell, and P. S. Rosenbloom. Soar: An architecture for general intelligence. Artificial Intelligence, 33(1):1–64, 1987.
J. E. Laird, K. R. Kinkade, S. Mohan, and J. Z. Xu. Cognitive robotics using the Soar cognitive architecture. In CogRob @ AAAI, 2012.
B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like people, 2016.
LangChain. LangChain, 2022. URL http://www.langchain.com.
H. Le, Y. Wang, A. D. Gotmare, S. Savarese, and S. C. H. Hoi. CodeRL: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems, 35:21314–21328, 2022.
Y. LeCun. A path towards autonomous machine intelligence version 0.9.2, 2022-06-27. Open Review, 62, 2022.
P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
B. Z. Li, W. Chen, P. Sharma, and J. Andreas. LaMPP: Language models as probabilistic priors for perception and action. arXiv preprint arXiv:2302.02801, 2023a.
H. Li, Y. Su, D. Cai, Y. Wang, and L. Liu. A survey on retrieval-augmented text generation. arXiv preprint arXiv:2202.01110, 2022a.
R. Li, L. B. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim, Q. Liu, E. Zheltonozhskii, T. Y. Zhuo, T. Wang, O. Dehaene, M. Davaadorj, J. Lamy-Poirier, J. Monteiro, O. Shliazhko, N. Gontier, N. Meade, A. Zebaze, M.-H. Yee, L. K. Umapathi, J. Zhu, B. Lipkin, M. Oblokulov, Z. Wang, R. Murthy, J. Stillerman, S. S. Patel, D. Abulkhanov, M. Zocca, M. Dey, Z. Zhang, N. Fahmy, U. Bhattacharyya, W. Yu, S. Singh, S. Luccioni, P. Villegas, M. Kunakov, F. Zhdanov, M. Romero, T. Lee, N. Timor, J. Ding, C. Schlesinger, H. Schoelkopf, J. Ebert, T. Dao, M. Mishra, A. Gu, J. Robinson, C. J. Anderson, B. Dolan-Gavitt, D. Contractor, S. Reddy, D. Fried, D. Bahdanau, Y. Jernite, C. M. Ferrandis, S. M. Hughes, T. Wolf, A. Guha, L. von Werra, and H. de Vries. StarCoder: may the source be with you! ArXiv, abs/2305.06161, 2023b.
Y. Li, D. H. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno, A. D. Lago, T. Hubert, P. Choy, C. de M. d'Autume, I. Babuschkin, X. Chen, P.-S. Huang, J. Welbl, S. Gowal, A. Cherepanov, J. Molloy, D. J. Mankowitz, E. S. Robson, P. Kohli, N. de Freitas, K. Kavukcuoglu, and O. Vinyals. Competition-level code generation with AlphaCode. Science, 378:1092–1097, 2022b.
J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 9493–9500, 2023a.
P. P. Liang, C. Wu, L.-P. Morency, and R. Salakhutdinov. Towards understanding and mitigating social biases in language models. In International Conference on Machine Learning, pages 6565–6576, 2021.
T. Liang, Z. He, W. Jiao, X. Wang, Y. Wang, R. Wang, Y. Yang, Z. Tu, and S. Shi. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint arXiv:2305.19118, 2023b.
F. Lieder and T. L. Griffiths. Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43:e1, 2020.
B. Y. Lin, Y. Fu, K. Yang, P. Ammanabrolu, F. Brahman, S. Huang, C. Bhagavatula, Y. Choi, and X. Ren. SwiftSage: A generative agent with fast and slow thinking for complex interactive tasks. arXiv preprint arXiv:2305.17390, 2023.
P. Lindes and J. E. Laird. Toward integrating cognitive linguistics and cognitive language processing. In Proceedings of the 14th International Conference on Cognitive Modeling (ICCM), 2016.
B. Liu, Y. Jiang, X. Zhang, Q. Liu, S. Zhang, J. Biswas, and P. Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023a.
H. Liu, C. Sferrazza, and P. Abbeel. Languages are rewards: Hindsight finetuning using human feedback. arXiv preprint arXiv:2302.02676, 2023b.
J. Liu, D. Shen, Y. Zhang, B. Dolan, L. Carin, and W. Chen. What makes good in-context examples for GPT-3? arXiv preprint arXiv:2101.06804, 2021.
P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9), 2023c. ISSN 0360-0300.
R. Liu, J. Wei, S. S. Gu, T.-Y. Wu, S. Vosoughi, C. Cui, D. Zhou, and A. M. Dai. Mind's eye: Grounded language model reasoning through simulation. In The Eleventh International Conference on Learning Representations, 2023d.
R. Liu, R. Yang, C. Jia, G. Zhang, D. Zhou, A. M. Dai, D. Yang, and S. Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023e.
LlamaIndex. LlamaIndex, 2023. URL http://www.llamaindex.ai.
L. E. Lwakatare, A. Raj, I. Crnkovic, J. Bosch, and H. H. Olsson. Large-scale machine learning systems in real-world industrial settings: A review of challenges and solutions. Information and Software Technology, 127:106368, 2020.
Z. Ma, Y. Mei, and Z. Su. Understanding the benefits and challenges of using large language model-based conversational agents for mental well-being support. arXiv preprint arXiv:2307.15810, 2023.
S. Macenski, T. Foote, B. Gerkey, C. Lalancette, and W. Woodall. Robot Operating System 2: Design, architecture, and uses in the wild. Science Robotics, 7(66):eabm6074, 2022.
A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.
A. A. Markov. The theory of algorithms. Trudy Matematicheskogo Instituta Imeni VA Steklova, 42:3–375, 1954.
M. G. Mattar and N. D. Daw. Prioritized memory access explains planning and hippocampal replay. Nature Neuroscience, 21(11):1609–1617, 2018.
J. L. McClelland, F. Hill, M. Rudolph, J. Baldridge, and H. Schütze. Extending machine language models toward human-level language understanding. arXiv preprint arXiv:1912.05877, 2019.
J. Meier, R. Rao, R. Verkuil, J. Liu, T. Sercu, and A. Rives. Language models enable zero-shot prediction of the effects of mutations on protein function. bioRxiv, 2021.
G. Mialon, R. Dessì, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Rozière, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz, et al. Augmented language models: a survey. arXiv preprint arXiv:2302.07842, 2023.
S. Mohan and J. Laird. Learning goal-oriented hierarchical tasks from situated interactive instruction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28, 2014.
S. Mohan, A. H. Mininger, J. R. Kirk, and J. E. Laird. Acquiring grounded representations of words with situated interactive instruction. Advances in Cognitive Systems, 2:113–130, 2012.
R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
K. Narasimhan, R. Barzilay, and T. Jaakkola. Deep transfer in reinforcement learning by language grounding. In Journal of Artificial Intelligence Research (JAIR), 2018.
A. Narayan-Chen, P. Jayannavar, and J. Hockenmaier. Collaborative dialogue in Minecraft. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5405–5415. Association for Computational Linguistics, 2019.
S. Nason and J. E. Laird. Soar-RL: Integrating reinforcement learning with Soar. Cognitive Systems Research, 6(1):51–59, 2005.
A. Newell. Studies in problem solving: Subject 3 on the crypt-arithmetic task DONALD+GERALD=ROBERT. Technical report, Carnegie Mellon University, 1967.
A. Newell. Physical symbol systems. Cognitive Science, 4(2):135–183, 1980.
A. Newell. Précis of unified theories of cognition. Behavioral and Brain Sciences, 15(3):425–437, 1992.
A. Newell and H. A. Simon. Human Problem Solving. Prentice-Hall, 1972.
A. Newell, P. S. Rosenbloom, and J. E. Laird. Symbolic architectures for cognition. Foundations of Cognitive Science, pages 93–131, 1989.
K. Nguyen and H. Daumé III. Help, Anna! Visual navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning. arXiv preprint arXiv:1909.01871, 2019.
K. Nguyen, D. Dey, C. Brockett, and B. Dolan. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12527–12537, 2019.
K. Nguyen, Y. Bisk, and H. Daumé III. A framework for learning to request rich and contextually useful information from humans. In ICML, July 2022a.
K. X. Nguyen. Language models are bounded pragmatic speakers. In First Workshop on Theory of Mind in Communicating Agents, 2023.
K. X. Nguyen, D. Misra, R. Schapire, M. Dudík, and P. Shafto. Interactive learning from activity description. In International Conference on Machine Learning, pages 8096–8108, 2021.
K. X. Nguyen, Y. Bisk, and H. Daumé III. A framework for learning to request rich and contextually useful information from humans. In International Conference on Machine Learning, pages 16553–16568, 2022b.
T. T. Nguyen, T. T. Huynh, P. L. Nguyen, A. W.-C. Liew, H. Yin, and Q. V. H. Nguyen. A survey of machine unlearning. arXiv preprint arXiv:2209.02299, 2022c.
A. Ni, S. Iyer, D. Radev, V. Stoyanov, W.-t. Yih, S. Wang, and X. V. Lin. LEVER: Learning to verify language-to-code generation with execution. In International Conference on Machine Learning, pages 26106–26128, 2023.
N. J. Nilsson. Shakey the robot. Technical Note, 1984.
R. Nogueira, W. Yang, J. Lin, and K. Cho. Document expansion by query prediction, 2019.
A. M. Nuxoll and J. E. Laird. Extending cognitive architecture with episodic memory. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1560–1564, 2007.
M. Nye, A. J. Andreassen, G. Gur-Ari, H. Michalewski, J. Austin, D. Bieber, D. Dohan, A. Lewkowycz, M. Bosma, D. Luan, et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114, 2021.
OpenAI. GPT-4 technical report. ArXiv, abs/2303.08774, 2023a.
OpenAI. Function calling and other API updates, 2023b. URL https://openai.com/blog/function-calling-and-other-api-updates.
L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
A. Padmakumar, J. Thomason, A. Shrivastava, P. Lange, A. Narayan-Chen, S. Gella, R. Piramuthu, G. Tur, and D. Hakkani-Tur. TEACh: Task-driven embodied agents that chat. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2017–2025, 2022.
N. D. Palo, A. Byravan, L. Hasenclever, M. Wulfmeier, N. Heess, and M. Riedmiller. Towards a unified agent with foundation models. In Workshop on Reincarnating Reinforcement Learning at ICLR 2023, 2023.
A. Parisi, Y. Zhao, and N. Fiedel. TALM: Tool augmented language models. arXiv preprint arXiv:2205.12255, 2022.
J. S. Park, J. C. O'Brien, C. J. Cai, M. R. Morris, P. Liang, and M. S. Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023.
P. Pataranutaporn, V. Danry, J. Leong, P. Punpongsanon, D. Novy, P. Maes, and M. Sra. AI-generated characters for supporting personalized learning and well-being. Nature Machine Intelligence, 3(12):1013–1022, 2021.
A. Peng, I. Sucholutsky, B. Li, T. R. Sumers, T. L. Griffiths, J. Andreas, and J. A. Shah. Language guided state abstractions. In Workshop on Social Intelligence in Humans and Robots at RSS 2023, 2023.
E. L. Post. Formal reductions of the general combinatorial decision problem. American Journal of Mathematics, 65(2):197–215, 1943.
A. Pritzel, B. Uria, S. Srinivasan, A. P. Badia, O. Vinyals, D. Hassabis, D. Wierstra, and C. Blundell. Neural episodic control. In International Conference on Machine Learning, pages 2827–2836, 2017.
M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.
C. Qian, X. Cong, C. Yang, W. Chen, Y. Su, J. Xu, Z. Liu, and M. Sun. Communicative agents for software development. arXiv preprint arXiv:2307.07924, 2023.
Y. Qin, S. Liang, Y. Ye, K. Zhu, L. Yan, Y. Lu, Y. Lin, X. Cong, X. Tang, B. Qian, et al. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv preprint arXiv:2307.16789, 2023.
M. Quigley. ROS: an open-source Robot Operating System. In IEEE International Conference on Robotics and Automation, 2009. URL https://api.semanticscholar.org/CorpusID:6324125.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
A. Z. Ren, A. Dixit, A. Bodrova, S. Singh, S. Tu, N. Brown, P. Xu, L. Takayama, F. Xia, Z. Xu, et al. Robots that ask for help: Uncertainty alignment for large language model planners. In 7th Annual Conference on Robot Learning, 2023.
O. J. Romero, J. Zimmerman, A. Steinfeld, and A. Tomasic. Synergistic integration of large language models and cognitive architectures for robust AI: An exploratory analysis. arXiv preprint arXiv:2308.09830, 2023.
B. Rozière, J. Gehring, F. Gloeckle, S. Sootla, I. Gat, X. Tan, Y. Adi, J. Liu, T. Remez, J. Rapin, A. Kozhevnikov, I. Evtimov, J. Bitton, M. P. Bhatt, C. C. Ferrer, A. Grattafiori, W. Xiong, A. Défossez, J. Copet, F. Azhar, H. Touvron, L. Martin, N. Usunier, T. Scialom, and G. Synnaeve. Code Llama: Open foundation models for code. ArXiv, abs/2308.12950, 2023.
O. Rubin, J. Herzig, and J. Berant. Learning to retrieve prompts for in-context learning. arXiv preprint arXiv:2112.08633, 2021.
E. Russek, D. Acosta-Kane, B. van Opheusden, M. G. Mattar, and T. Griffiths. Time spent thinking in online chess reflects the value of computation. PsyArXiv, 2022.
S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Pearson Education Limited, London, 2013.
D. Sadigh, A. D. Dragan, S. Sastry, and S. A. Seshia. Active preference-based learning of reward functions. In N. M. Amato, S. S. Srinivasa, N. Ayanian, and S. Kuindersma, editors, Robotics: Science and Systems XIII, 2017.
W. Saunders, C. Yeh, J. Wu, S. Bills, L. Ouyang, J. Ward, and J. Leike. Self-critiquing models for assisting human evaluators. arXiv preprint arXiv:2206.05802, 2022.
T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, and M. Young. Machine learning: The high interest credit card of technical debt. In SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop), 2014.
T. Shi, A. Karpathy, L. Fan, J. Hernandez, and P. Liang. World of Bits: An open-domain platform for web-based agents. In International Conference on Machine Learning, pages 3135–3144, 2017.
N. Shinn, F. Cassano, B. Labash, A. Gopinath, K. Narasimhan, and S. Yao. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023.
M. Shridhar, X. Yuan, M.-A. Côté, Y. Bisk, A. Trischler, and M. Hausknecht. ALFWorld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768, 2020.
T. Silver, V. Hariprasad, R. S. Shuttleworth, N. Kumar, T. Lozano-Pérez, and L. P. Kaelbling. PDDL planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022.
T. Silver, S. Dan, K. Srinivas, J. B. Tenenbaum, L. P. Kaelbling, and M. Katz. Generalized planning in PDDL domains with pretrained large language models. arXiv preprint arXiv:2305.11014, 2023.
I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg. ProgPrompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11523–11530, 2023.
T. Sumers, R. Hawkins, M. K. Ho, T. Griffiths, and D. Hadfield-Menell. How to talk so AI will learn: Instructions, descriptions, and autonomy. Advances in Neural Information Processing Systems, 35:34762–34775, 2022.
T. Sumers, K. Marino, A. Ahuja, R. Fergus, and I. Dasgupta. Distilling internet-scale vision-language models into embodied agents. In Proceedings of the 40th International Conference on Machine Learning, pages 32797–32818, 2023.
T. R. Sumers, M. K. Ho, R. D. Hawkins, K. Narasimhan, and T. L. Griffiths. Learning rewards from linguistic feedback. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6002–6010, 2021.
R. Sun. Desiderata for cognitive architectures. Philosophical Psychology, 17(3):341–373, 2004.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
O. Tafjord, B. Dalvi, and P. Clark. ProofWriter: Generating implications, proofs, and abductive statements over natural language. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3621–3634, 2021.
R. Tamari, C. Shani, T. Hope, M. R. L. Petruck, O. Abend, and D. Shahaf. Language (re)modelling: Towards embodied language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6268–6281, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.559.
M. Tambe, W. L. Johnson, R. M. Jones, F. Koss, J. E. Laird, P. S. Rosenbloom, and K. Schwamb. Intelligent agents for interactive simulation environments. AI Magazine, 16(1):15–15, 1995.
M. Tang, S. Yao, J. Yang, and K. Narasimhan. Referral augmentation for zero-shot information retrieval, 2023a.
Q. Tang, Z. Deng, H. Lin, X. Han, Q. Liang, and L. Sun. ToolAlpaca: Generalized tool learning for language models with 3000 simulated cases. arXiv preprint arXiv:2306.05301, 2023b.
S. Tellex, T. Kollar, S. Dickerson, M. Walter, A. Banerjee, S. Teller, and N. Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 25, pages 1507–1514, 2011.
J. Thomason, M. Murray, M. Cakmak, and L. Zettlemoyer. Vision-and-dialog navigation. In Conference on Robot Learning, pages 394–406. PMLR, 2020.
A. M. Turing et al. On computable numbers, with an application to the Entscheidungsproblem. J. of Math, 58(345-363):5, 1936.
J. Tuyls, S. Yao, S. Kakade, and K. Narasimhan. Multi-stage episodic control for strategic exploration in text games. arXiv preprint arXiv:2201.01251, 2022.
K. Valmeekam, A. Olmo, S. Sreedharan, and S. Kambhampati. Large language models still can't plan (a benchmark for LLMs on planning and reasoning about change). arXiv preprint arXiv:2206.10498, 2022.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
G. Wang, Y. Xie, Y. Jiang, A. Mandlekar, C. Xiao, Y. Zhu, L. Fan, and A. Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023a.
L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, J. Tang, X. Chen, Y. Lin, W. X. Zhao, Z. Wei, and J.-R. Wen. A survey on large language model based autonomous agents, 2023b.
L. Wang, N. Yang, and F. Wei. Query2doc: Query expansion with large language models. arXiv preprint arXiv:2303.07678, 2023c.
R. Wang, P. Jansen, M.-A. Côté, and P. Ammanabrolu. ScienceWorld: Is your agent smarter than a 5th grader? arXiv preprint arXiv:2203.07540, 2022a.
S. I. Wang, P. Liang, and C. D. Manning. Learning language games through interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2368–2378, 2016.
X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, and D. Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022b.
J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, and W. Fedus. Emergent abilities of large language models. Transactions on Machine Learning Research, 2022a. ISSN 2835-8856. Survey Certification.
J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022b.
L. Weng. LLM-powered autonomous agents. lilianweng.github.io, Jun 2023. URL https://lilianweng.github.io/posts/2023-06-23-agent/.
J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
A. N. Whitehead and B. Russell. Principia Mathematica to *56, volume 2. Cambridge University Press, 1997.
D. E. Wilkins. Practical Planning: Extending the Classical AI Planning Paradigm. Elsevier, 2014.
T. Winograd. Understanding natural language. Cognitive Psychology, 3(1):1–191, 1972.