id: string (length 12–15)
title: string (length 8–162)
content: string (length 1–17.6k)
prechunk_id: string (length 0–15)
postchunk_id: string (length 0–15)
arxiv_id: string (length 10)
references: list (length 1)
2308.04026#22
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
OpenAI. 2023. Gpt-4 technical report. Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2022. Social simulacra: Creating populated prototypes for social computing systems. David Premack and Guy Woodruff. 1978. Does the chimpanzee have a theory of mind? Behavioral and brain sciences, 1(4):515–
2308.04026#21
2308.04026#23
2308.04026
[ "2009.03300" ]
2308.04026#23
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
526. Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. 2023. Communicative agents for software development. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580. Hao Sun, Zhexin Zhang, Jiawen Deng, Jiale Cheng, and Minlie Huang. 2023. Safety assessment of chinese large language models. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023a.
2308.04026#22
2308.04026#24
2308.04026
[ "2009.03300" ]
2308.04026#24
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
Voyager: An open-ended embodied agent with large language models. Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023b. Is chatgpt a good nlg evaluator? a preliminary study. Lilian Weng. 2023. Llm-powered autonomous agents. lilianweng.github.io. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models.
2308.04026#23
2308.04026#25
2308.04026
[ "2009.03300" ]
2308.04026#25
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. Agieval: A human-centric benchmark for evaluating foundation models. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023.
2308.04026#24
2308.04026#26
2308.04026
[ "2009.03300" ]
2308.04026#26
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
Lima: Less is more for alignment.
2308.04026#25
2308.04026
[ "2009.03300" ]
2308.03983#0
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
arXiv:2308.03983v1 [cs.CL] 8 Aug 2023 # SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi Kioxia Corporation, Japan [email protected] # Abstract
2308.03983#1
2308.03983
[ "2302.13971" ]
2308.03983#1
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLMs without requiring additional model fine-tuning. Moreover, the Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates the roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementations. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements for the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
2308.03983#0
2308.03983#2
2308.03983
[ "2302.13971" ]
2308.03983#2
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
# 1 Introduction Figure 1: Retrieval-Centric Generation (RCG) approach presents an innovative concept that leverages the mutually beneficial interaction between LLMs and retrievers for more efficient context interpretation and knowledge memorization. Increased clarity in role-separation between context interpretation and knowledge memorization can potentially boost the performance of generative AI systems. Generative-based Natural Language Processing (NLP) has witnessed significant progress (Brown et al., 2020) in recent years. With the introduction of the Transformer (Vaswani et al., 2017) architecture, the possibility of developing high-accuracy language models that can perform tasks such as text generation, text summarization and language translation has become a reality. These models (Brown et al., 2020; Chowdhery et al., 2022), when scaled up to billions of parameters (Wei et al., 2022a), have shown remarkable improvements in text generation tasks such as zero-shot inference, popularizing the term Generative AI. Instead of model fine-tuning, careful design of prompts has proven effective in adapting these models to specific domains for various tasks (Brown et al., 2020). This has given rise to the field of prompt-engineering. Additionally, Chain-of-Thought (Wei et al., 2022b; Kojima et al., 2022) decomposes a complex task into manageable steps, thereby expanding the capabilities of generative-based language models even further. Training large language models (LLMs) requires immense computational resources, often involving thousands of high-end GPUs. Fine-tuning these models can also be challenging. Although prompt-engineering helped to reduce the need for fine-tuning, there was still noticeable instruction misalignment when interacting with a human user. To address this issue, techniques such as reinforcement learning from human feedback (RLHF) (Christiano et al., 2017) have been explored to align the behavior of LLMs with human values (Ouyang et al., 2022; OpenAI, 2023).
2308.03983#1
2308.03983#3
2308.03983
[ "2302.13971" ]
2308.03983#3
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Additionally, QLoRA (Dettmers et al., 2023), combining a low-rank adaptation technique (Hu et al., 2022) with a quantization technique, has made it possible to fine-tune these models on an individual developer's hardware, making them more accessible to a wider range of users. Despite these advances, there are still limitations to the capacity of LLMs, and they do not inherently recognize information that was not present during training and fine-tuning. Memorization of factual knowledge in the long tail is also a challenge (Mallen et al., 2023). Most recently, there has been growing interest in integrating external knowledge sources into LLMs for generating text (Borgeaud et al., 2022; Guu et al., 2020; Lewis et al., 2020). Similar approaches have also been proposed for solving computer vision tasks (Nakata et al., 2022; Iscen et al., 2023). The Retrieval-Augmented Generation (RAG) (Lewis et al., 2020) architecture is an approach that enhances the capabilities of LLMs by incorporating external data sources using a sparse or dense retriever (Karpukhin et al., 2020), enabling the use of privately owned data without requiring retraining or fine-tuning the LLM (Chase, 2022). However, developing retrieval-augmented LLM-based generative models is still in its early stages. Our proposed tool can help facilitate these developments. Additionally, we introduce a new architectural concept called Retrieval-Centric Generation (RCG), which builds upon the Retrieval-Augmented Generation approach by emphasizing the crucial role of the LLM in interpreting context and entrusting knowledge memorization to the retriever component, putting greater importance on the retriever, as depicted in Figure 1. By separating context interpretation from knowledge memorization, this approach has the potential to reduce the scale (Carlini et al., 2023) of the LLM required for generative tasks, leading to more efficient and interpretable results. Moreover, this approach may help mitigate hallucinations (Maynez et al., 2020) by limiting the scope of the LLM'
2308.03983#2
2308.03983#4
2308.03983
[ "2302.13971" ]
2308.03983#4
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
s generation. Once we define RCG as above, we can re-define RAG as enabling more permissible usage of the LLM's inherent knowledge, whereas RCG prioritizes clear demarcations between context interpretation and knowledge memorization. SimplyRetrieve is an open-source tool aimed at providing a localized, lightweight, and user-friendly interface to the Retrieval-Centric Generation approach for the machine learning community. This tool encompasses a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. SimplyRetrieve is designed to be simple and accessible to the community, as well as end-users. Our retrieval-centric platform incorporates multiple selectable knowledge bases featuring a Mixture-of-Knowledge-Bases (MoKB) mode and Explicit Prompt-Weighting (EPW) of the retrieved knowledge base. By designing SimplyRetrieve with these features, we enable the machine learning community to explore and develop with a lightweight, private data interface to LLM-based generative AI systems, with a focus on retrieval-centric generation. Potential developments that can be explored using this tool include: (1) examining the effectiveness of retrieval-centric generation in developing safer, more interpretable, and responsible AI systems; (2) optimizing the efficiency of separating context interpretation and knowledge memorization within the retrieval-centric generation approach; and (3) improving prompt-engineering techniques for retrieval-centric generation. SimplyRetrieve is available at https://github.com/RCGAI/SimplyRetrieve. Our contributions can be summarized as follows:
2308.03983#3
2308.03983#5
2308.03983
[ "2302.13971" ]
2308.03983#5
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
• We propose SimplyRetrieve, an innovative and user-friendly tool that leverages a GUI and API platform to facilitate a Retrieval-Centric Generation approach. This platform is further strengthened by two key components: a Private Knowledge Base Constructor and a Retrieval Tuning Module. • We open sourced our tool to the machine learning community and identify potential development directions of Retrieval-Centric Generation. # 2 Related Works The emergence of the Retrieval-Augmented Generation architecture has spurred the development of numerous open-source tools. The ChatGPT Retrieval Plugin1, for instance, integrates the ability to retrieve and enhance personal or organizational documents into the widely used ChatGPT model (OpenAI, 2023). (1https://github.com/openai/chatgpt-retrieval-plugin) Figure 2: SimplyRetrieve is an open-source tool that provides a localized, lightweight, and user-friendly interface to the Retrieval-Centric Generation approach for the machine learning community. This tool features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. (MoKB: Mixture-of-Knowledge-Base; EPW: Explicit Prompt-Weighting of Knowledge Base.) Similarly, fastRAG (Izsak et al., 2023) provides a streamlined platform for constructing efficient retrieval-augmented generation
2308.03983#4
2308.03983#6
2308.03983
[ "2302.13971" ]
2308.03983#6
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
pipelines. Additionally, LangChain (Chase, 2022) offers a comprehensive generative chat AI library featuring agents, data augmentation, and memory capabilities. Finally, Haystack (Pietsch et al., 2019) presents an all-encompassing NLP framework supporting question answering, answer generation, semantic document search, and retrieval-augmentation. Both LangChain and Haystack employ agent-based pipelining techniques and can process complex queries. However, this complexity may hinder the explainability of LLMs, making it challenging to interpret their performance in retrieval-augmented settings. On the other hand, our work offers a lightweight and transparent approach to implementing sophisticated retrieval-centric, as well as retrieval-augmented, architectures, while maintaining a strong emphasis on response interpretability and wider accessibility to the community. Unlike previous works such as PrivateGPT (PrivateGPT), which provides a privacy-preserving chat AI tool but lacks customization options and analytical capabilities, our tool offers a comprehensive set of features for tailoring and analyzing retrieval-centric generation. Furthermore, to the best of our knowledge, we are the first to introduce the RCG concept and show initial experiments with it using our tool. # 3 Tool Design SimplyRetrieve is designed to deploy the RCG pipeline: construct a knowledge base, tune the architecture, and make predictions. In this paper, we focus on describing the core specifications of the tool. For details about the setup procedures, refer to the repository at https://github.com/RCGAI/SimplyRetrieve. # 3.1 GUI and API based Retrieval-Centric Generation Platform As shown in Figure 2, there are two dense models in our tool: an LLM and an Approximate Nearest Neighbor Search (ANNS) based Knowledge Retriever. The LLM can be any one of the off-the-shelf open-source LLM models available on Hugging Face (Wolf et al., 2020), ranging from 1B to more than 100B-scale in parameters, such as Touvron et al. (2023a,b). The Knowledge Retriever employs a dense retriever that is compatible with various embedding models available on Hugging Face. Additionally, our tool allows integration of multiple knowledge bases simultaneously, enabling user-selectable knowledge bases depending on the specific use case.
2308.03983#5
2308.03983#7
2308.03983
[ "2302.13971" ]
2308.03983#7
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
In terms of the GUI, we have designed a simple yet intuitive layout using Gradio (Abid et al., 2019), which provides a familiar streaming chatbot interface with user control for managing the running modes of the retriever, engineering prompts, and configuring the tool. As depicted in Figure 3, our GUI features a comprehensive retrieval-centric tuning panel for functions including manual knowledge base selection from multiple sources and a Mixture-of-Knowledge-Base mode. Moreover, we employ Explicit Prompt-Weighting of retrieval to adjust the level of influence exerted by the retriever. To ensure seamless integration, we also developed a comprehensive API access function using the Gradio Client Interface, and we allow multi-user concurrent access to both UIs, leveraging Gradio's queue functionality to manage requests efficiently. Figure 3: The GUI design of SimplyRetrieve features four primary tabs. The Chat tab serves as the central query and response interface with a retrieval-centric tuning panel. The Prompt tab provides an intuitive editor for modifying, updating, and saving prompts used by the AI. The Config tab enables users to customize various tool settings and save their preferences. Finally, the Analysis tab offers a comprehensive analytics platform for analyzing and logging data related to SimplyRetrieve's performance and usage. The retrieval-centric tuning panel enables lightweight and simplistic access to RCG. By using the manual knowledge base selection mode, users can construct and import multiple private knowledge bases simultaneously into this tool. The ability to select the most relevant knowledge base for each task allows users to maintain control over the selection process while avoiding any unexpected outcomes. Our MoKB mode enables automatic selection of the most suitable knowledge base based on the similarity between the query and knowledge base functional descriptions. We use semantic cosine similarity of the embedding space to calculate these scores, providing an efficient and lightweight approach to knowledge base auto-selection. By updating the functional descriptions in the configuration file, users can further enhance the accuracy of the selection algorithm.
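As a rough illustration of the MoKB auto-selection just described, the sketch below scores a query against each knowledge base's functional description by cosine similarity of embeddings. The description strings and knowledge-base names are assumptions made for the example, not values taken from the tool's configuration:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical functional descriptions, one per knowledge base (in the tool these live in the config file).
KB_DESCRIPTIONS = {
    "kb_corporate": "Corporate information: factories, products, revenue, sustainability initiatives.",
    "kb_manuals": "Technical manuals and troubleshooting guides for storage products.",
}

encoder = SentenceTransformer("intfloat/multilingual-e5-base")  # same encoder family as in the evaluations

def select_knowledge_base(query: str) -> str:
    """Return the knowledge base whose functional description is most similar to the query."""
    names = list(KB_DESCRIPTIONS)
    # Normalized embeddings, so the dot product equals cosine similarity.
    desc_emb = encoder.encode([KB_DESCRIPTIONS[n] for n in names], normalize_embeddings=True)
    query_emb = encoder.encode([query], normalize_embeddings=True)[0]
    return names[int(np.argmax(desc_emb @ query_emb))]

print(select_knowledge_base("Where are the fabrication facilities located?"))
```

Updating the description strings refines which knowledge base wins for a given query, mirroring the configuration-file tuning mentioned above.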
2308.03983#6
2308.03983#8
2308.03983
[ "2302.13971" ]
2308.03983#8
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Additionally, our Explicit Prompt-Weighting feature allows manual adjustment of the degree of influence of the retriever on the language model, enabling customized control over the balance between the retriever and the LLM. Through prompt-engineering or token weight adjustment, users can adapt the tool to their specific needs, ensuring optimal performance. SimplyRetrieve has incorporated Explicit Prompt-Weighting through prompt-engineering, where the weightage can be adjusted to fine-tune the percentage of knowledge tokens used in the prompt out of the retrieved tokens. However, we have not implemented token weight adjustment in this study and leave it for future work. # 3.2 Private Knowledge Base Constructor Our Retrieval-Centric Generation Platform is assisted by a Private Knowledge Base Constructor that creates a local and personalized knowledge base using the user's documents. This constructor employs a scalable documents loader that can handle large volumes of documents by chunking and streaming the loading, splitting, and knowledge base creation processes, allowing for efficient document processing. The constructor supports various document formats such as PDF, TXT, DOC, DOCX, PPT, PPTX, HTML, MD, and CSV, among others, and can be easily expanded by editing the configuration file. Additionally, the length of passages in the document-splitting function is easily configurable to meet specific requirements. After generating the sources for the knowledge base, we use a dense encoder to convert the text into numerical embeddings that can be used for semantic search and retrieval. To accommodate large-scale knowledge bases, we utilize ANNS for efficient semantic retrieval. By default, our tool employs the Hierarchical Navigable Small Worlds (HNSW) (Malkov and Yashunin, 2020) algorithm, but we also provide support for flat indexing and the IVFPQ-HNSW method, which combines inverted file indexing with product quantization and HNSW coarse quantizers. The Index Constructor function automatically creates the required index files for semantic searching. We implement our indexing function using the Faiss library (Johnson
2308.03983#7
2308.03983#9
2308.03983
[ "2302.13971" ]
2308.03983#9
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
et al., 2019). # 3.3 Retrieval Tuning Module The Retrieval Tuning Module of our tool includes three key functionalities: prompt-engineering, tool configuration, and analysis and data logging. The prompt-engineering functionality allows users to easily edit, update, and save retrieval-related prompts using a user-friendly Prompt tab within our GUI. Available prompts are AI Prefix, Retriever Prefix, Retriever Suffix, Model Prefix and Model Suffix. The configuration functionality enables users to modify and save all configurable settings via the Config tab within our GUI. Finally, the analysis and data logging functionality collects and displays retrieval-related analysis data, including the retrieved knowledge base, query, response, and sentence-level and token-level similarity scores, in the Analysis tab of our GUI. Similarity scores are calculated based on semantic cosine similarity of both sentence-to-sentence embeddings and all-token-to-token embeddings. This approach allows us to capture both local and global similarities between sentences, leading to more accurate assessments of their comparability. Additionally, users can save all logged data to a log file for further analysis. GUI designs are depicted in Figures 4, 5 and 6 of Appendix A.1. To deploy an end-user mode, users can simply disable the update functions in the Retrieval Tuning Module through command-line options. # 4 Evaluations In this section, we perform several qualitative evaluations to demonstrate the usability and behavior of our tool. We construct our knowledge base using the most recent information available on the website of an organization2. We utilize the models publicly available on Hugging Face, Wizard-Vicuna-13B3 (Xu et al., 2023; Chiang et al., 2023) as the LLM and Multilingual-E5-base4 (Wang et al., 2022) as the encoder for our evaluations, unless specified otherwise. We load both models onto a single Nvidia A100 GPU in 8-bit INT8 mode for lower memory usage and higher inference speed. We set the temperature of the LLM to 0. We utilize HNSW for indexing of knowledge bases and set the number of passages retrieved to 5.
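The default indexing and retrieval path described in Section 3.2 (HNSW via Faiss, with 5 passages retrieved per query in the evaluations) can be sketched as follows; the embedding arrays and file names are placeholders rather than the tool's actual code:

```python
import faiss
import numpy as np

# Placeholders: embeddings produced by the dense encoder, one row per knowledge-base passage.
embeddings = np.load("passages_emb.npy").astype("float32")   # shape: (num_passages, dim)
dim = embeddings.shape[1]

# Default index is HNSW; 32 links per node is a common Faiss setting.
index = faiss.IndexHNSWFlat(dim, 32)
index.hnsw.efConstruction = 80        # build-time accuracy/speed trade-off
index.add(embeddings)
faiss.write_index(index, "knowledgebase.index")

# Retrieval: top-5 passages for one query embedding, matching the evaluation setting.
query_emb = np.load("query_emb.npy").astype("float32").reshape(1, -1)
distances, passage_ids = index.search(query_emb, 5)
print(passage_ids[0])
```

Swapping `IndexHNSWFlat` for a flat or IVFPQ-HNSW index would correspond to the alternative indexing options the constructor supports.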
2308.03983#8
2308.03983#10
2308.03983
[ "2302.13971" ]
2308.03983#10
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
2https://www.kioxia.com/en-jp/top.html 3https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored 4https://huggingface.co/intfloat/multilingual-e5-base # 4.1 Qualitative Evaluation We compare the results of three approaches: Retrieval-Centric Generation (RCG), Retrieval-Augmented Generation (RAG), and Retrieval-OFF Generation (ROG). Note that in this work, we define RAG as allowing more permissible integration of the LLM's inherent and externally retrieved knowledge, whereas RCG prioritizes clear demarcations between context interpretation and knowledge memorization. Investigating advanced methods for extracting RCG behavior is a promising research topic. In this work, we conduct simple experiments using a prompt-engineering technique to reveal the potential of the RCG approach. Specifically, for RCG, we employ a retrieval suffix prompt that reads "answer the following question with the provided knowledge." For RAG, we use a less constraining prompt that reads "answer the following question. You may use the provided knowledge."
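For readers who want a comparable setup, the snippet below approximates the evaluation configuration (Wizard-Vicuna-13B in 8-bit, greedy decoding standing in for temperature 0) with Hugging Face transformers. It is a hedged sketch, not SimplyRetrieve's own loading code, and the prompt and query strings are invented examples:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ehartford/Wizard-Vicuna-13B-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",     # place the model on the available GPU(s)
    load_in_8bit=True,     # INT8 weights for lower memory usage (requires bitsandbytes)
)

# Invented example of an RCG-style prompt: retrieved knowledge in quotes, then the retrieval suffix.
prompt = ('"Kioxia operates fabrication facilities in Yokkaichi and Kitakami." '
          'answer the following question with the provided knowledge.\n'
          "Where are Kioxia's factories located?\nAI:")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)  # greedy ~ temperature 0
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```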
2308.03983#9
2308.03983#11
2308.03983
[ "2302.13971" ]
2308.03983#11
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Finally, for ROG, which is the baseline in our evaluation, we disable the retriever altogether. By using the prompt-engineering function in SimplyRetrieve, we can easily perform these experiments. Queries are human annotated. In Tables 1 and 2, blue-colored text indicates factually correct responses while red-colored text indicates hallucinations. Table 1 (Approach: Response) — RCG: "Kioxia has factories located in both Yokkaichi city, Mie prefecture and Kitakami city, Iwate prefecture in Japan." RAG: "Kioxia has factories located in various regions across the globe including Japan, Thailand, Malaysia, China, and the United States. However, its main production site is located in Yokkaichi City, Mie Prefecture, Japan." ROG: "Our main production site for HDD and SSD products is located in Yamagata, Japan. We also have manufacturing sites in Thailand and the Philippines that produce optical disc media such as Blu-ray Discs and DVDs."
2308.03983#10
2308.03983#12
2308.03983
[ "2302.13971" ]
2308.03983#12
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Table 1: Qualitative comparison between Retrieval-Centric, Retrieval-Augmented and Retrieval-OFF approaches using prompt-engineering in SimplyRetrieve. Based on the results in Table 1, we observed that RCG successfully answered our query accurately, whereas RAG attempted to integrate the retrieved knowledge base with the LLM's own knowledge of the specific organization, resulting in partially erroneous information. Meanwhile, ROG responded to our query with the LLM's preexisting mistaken knowledge, generating falsehoods. Our findings suggest that crafting clear and direct prompts, such as "answer the given question using the provided knowledge", can encourage retrieval-centric behavior from the LLM, although further refined prompt-engineering may be necessary for optimal performance. As shown in Table 2, RCG effectively addressed the query and produced accurate results. When combining RCG with a 50% Explicit Prompt-Weighting (RCG-EPW), where only the first 50% of the retrieved knowledge base is injected into the prompt, the model generated partial responses but still maintained factual accuracy. In contrast, the pure LLM without RC enabled (ROG) provided hallucinatory responses, highlighting the importance of incorporating retrieval-centric techniques in language models for improved performance. Table 2 (Approach: Response) — RCG: "Kioxia developed a high capacity storage device which could store large amounts of image data required by AI algorithms. This allowed them to deploy memory centric AI systems capable of handling multiple classification tasks simultaneously without compromising performance or accuracy." RCG-EPW: "Deployment of Memory-centric AI with high capacity storage" ROG: "The company developed a new image classification algorithm that achieved top-1 accuracy of 82.7% on the ImageNet validation set, which is an improvement over previous state-of-the-art results by more than one percentage point. This was made possible through advancements in hardware and software technologies such as Tensor Core GPUs and our own deep learning framework, ONNX-MXNET."
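Explicit Prompt-Weighting as used in RCG-EPW boils down to keeping only a leading fraction of the retrieved knowledge tokens before building the prompt. A minimal illustrative helper, with hypothetical passage strings and a crude whitespace tokenizer (the tool's internal implementation may differ):

```python
def weight_retrieved_knowledge(passages: list[str], weight: float) -> str:
    """Keep only the leading `weight` fraction (0.0-1.0) of the retrieved knowledge tokens."""
    tokens = " ".join(passages).split()                 # crude whitespace tokenization, for illustration
    kept = tokens[: max(1, int(len(tokens) * weight))]
    return " ".join(kept)

# Hypothetical retrieved passages; RCG-EPW at 50% keeps only the first half of the tokens.
retrieved_passages = [
    "Kioxia deployed a memory-centric AI image classification system with high-capacity storage.",
    "The storage holds the large amounts of image data required by the AI algorithms.",
]
knowledge_for_prompt = weight_retrieved_knowledge(retrieved_passages, weight=0.5)
print(knowledge_for_prompt)
```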
2308.03983#11
2308.03983#13
2308.03983
[ "2302.13971" ]
2308.03983#13
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Table 2: Effects of Retrieval-Centric Generation in SimplyRetrieve, based on the knowledge base about an organization. # 4.2 Accuracy & Speed Evaluations In addition to evaluating the effectiveness of RCG using human annotations, we also conduct an internal evaluation of our tool's performance using a self-generated dataset. To create this dataset, we pass relevant passages through the language model Llama-2-13B-chat (Touvron et al., 2023b) to generate 10 query and label pairs. For details on how we generated this dataset, refer to Appendix A.4. We employ the Rouge-L score (Lin, 2004) as our performance metric. We perform this evaluation by using the API function of SimplyRetrieve. Our results in Table 3 show that RCG significantly improves the Rouge-L score compared to the baseline approach of ROG, while also being slightly more competitive than RAG. Moreover, despite the fact that RCG processes longer prompts than ROG due to the addition of knowledge tokens, we observe a decrease in processing time owing to the increased precision and brevity of the generated responses. Specifically, the number of response tokens generated in RCG is on average 36% less than in ROG. This efficient performance may facilitate broader adoption within the community, as users can expect quicker response generation without sacrificing accuracy. Table 3 (Response accuracy & speed evaluation of SimplyRetrieve) — ROG: Rouge-L 0.186, 17.22 s/query; RAG: Rouge-L 0.359, 18.41 s/query; RCG: Rouge-L 0.413, 11.67 s/query. Finally, our findings suggest that even a modestly sized LLM of 13B parameters can demonstrate satisfactory performance with the RCG approach on never-seen-before factual knowledge without any model fine-tuning, potentially facilitating the deployment of Generative AI systems in real-world scenarios. See Appendix A.2 for further discussions and A.5 for ablation studies. # 5 Conclusion
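Returning to the Section 4.2 protocol above, the accuracy-and-speed measurement can be approximated with the evaluate library's ROUGE implementation as sketched below; the query/label pair and the get_response stub standing in for SimplyRetrieve's API are placeholders:

```python
import time
import evaluate

rouge = evaluate.load("rouge")

# Placeholder for the 10 self-generated query/label pairs described in Appendix A.4.
queries = ["When did Kioxia start operating its new fabrication facility (Fab7)?"]
labels = ["According to the text, Kioxia started operating its new fabrication facility (Fab7) in the fall of 2022."]

def get_response(query: str) -> str:
    # Placeholder: replace with a call to the SimplyRetrieve API (e.g., via gradio_client).
    return "Kioxia began operating Fab7 in the fall of 2022."

predictions, elapsed = [], []
for q in queries:
    start = time.time()
    predictions.append(get_response(q))
    elapsed.append(time.time() - start)

scores = rouge.compute(predictions=predictions, references=labels)
print(f"Rouge-L: {scores['rougeL']:.3f}, time/query: {sum(elapsed) / len(elapsed):.2f}s")
```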
2308.03983#12
2308.03983#14
2308.03983
[ "2302.13971" ]
2308.03983#14
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
We introduced SimplyRetrieve, an open-source tool that aims to provide a localizable, lightweight, and user-friendly GUI and API platform for a Retrieval-Centric Generation approach based on LLMs. Our tool enables developers and end-users to easily interact and develop with a privacy-preserving and locally implemented LLM-based RCG system, which we believe will contribute to the democratization of these technologies within the machine learning community. Increased clarity in role-separation between context interpretation and knowledge memorization can potentially boost the performance and interpretability of generative AI systems, facilitating deployments. # Limitations It is important to note that this tool does not provide a foolproof solution for ensuring a completely safe and responsible response from generative AI models, even within a retrieval-centric approach.
2308.03983#13
2308.03983#15
2308.03983
[ "2302.13971" ]
2308.03983#15
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
The development of safer, interpretable, and responsible AI systems remains an active area of research and ongoing effort. Generated texts from this tool may exhibit variations, even when only slightly modifying prompts or queries, due to the next-token prediction behavior of current-generation LLMs. This means users may need to carefully fine-tune both the prompts and queries to obtain optimal responses. # References Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, and James Zou. 2019.
2308.03983#14
2308.03983#16
2308.03983
[ "2302.13971" ]
2308.03983#16
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Gradio: Hassle-free sharing and testing of ml models in the wild. arXiv preprint arXiv:1906.02569. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego De Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 2206–2240. PMLR. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–
2308.03983#15
2308.03983#17
2308.03983
[ "2302.13971" ]
2308.03983#17
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
1901. Curran Associates, Inc. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2023. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations. Harrison Chase. 2022. LangChain. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023.
2308.03983#16
2308.03983#18
2308.03983
[ "2302.13971" ]
2308.03983#18
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022.
2308.03983#17
2308.03983#19
2308.03983
[ "2302.13971" ]
2308.03983#19
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020.
2308.03983#18
2308.03983#20
2308.03983
[ "2302.13971" ]
2308.03983#20
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Realm: Retrieval-augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations. Ahmet Iscen, Alireza Fathi, and Cordelia Schmid. 2023. Improving image recognition by retrieving from web-scale image-text data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19295–
2308.03983#19
2308.03983#21
2308.03983
[ "2302.13971" ]
2308.03983#21
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
19304. Peter Izsak, Moshe Berchansky, Daniel Fleischer, and Ronen Laperdon. 2023. fastRAG: Efficient Retrieval Augmentation and Generation Framework. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.
2308.03983#20
2308.03983#22
2308.03983
[ "2302.13971" ]
2308.03983#22
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459–
2308.03983#21
2308.03983#23
2308.03983
[ "2302.13971" ]
2308.03983#23
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
9474. Curran Associates, Inc. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yu A. Malkov and D. A. Yashunin. 2020. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Trans. Pattern Anal. Mach. Intell., 42(4):824–836.
2308.03983#22
2308.03983#24
2308.03983
[ "2302.13971" ]
2308.03983#24
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822, Toronto, Canada. Association for Computational Linguistics. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics. Kengo Nakata, Youyang Ng, Daisuke Miyashita, Asuka Maki, Yu-Chieh Lin, and Jun Deguchi. 2022. Revisiting a knn-based image classification system
2308.03983#23
2308.03983#25
2308.03983
[ "2302.13971" ]
2308.03983#25
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
with high-capacity storage. In Computer Vision – ECCV 2022, pages 457–474, Cham. Springer Nature Switzerland. OpenAI. 2023. Chatgpt. https://openai.com/blog/chatgpt. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–
2308.03983#24
2308.03983#26
2308.03983
[ "2302.13971" ]
2308.03983#26
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
27744. Curran Associates, Inc. Malte Pietsch, Timo Möller, Bogdan Kostic, Julian Risch, Massimiliano Pippi, Mayank Jobanputra, Sara Zanzottera, Silvano Cerza, Vladimir Blagojevic, Thomas Stadelmann, Tanay Soni, and Sebastian Lee. 2019. Haystack: the end-to-end NLP framework for pragmatic builders. PrivateGPT. PrivateGPT.
2308.03983#25
2308.03983#27
2308.03983
[ "2302.13971" ]
2308.03983#27
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Accessed: 2023-07-04. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b.
2308.03983#26
2308.03983#28
2308.03983
[ "2302.13971" ]
2308.03983#28
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022a. Emergent abilities of large language models. Transactions on Machine Learning Research.
2308.03983#27
2308.03983#29
2308.03983
[ "2302.13971" ]
2308.03983#29
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Survey Certification. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing:
2308.03983#28
2308.03983#30
2308.03983
[ "2302.13971" ]
2308.03983#30
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244. # A Appendix # A.1 GUI Design of Retrieval Tuning Module Figure 4 shows the GUI design of the prompt-engineering interface. Figure 5 shows the GUI design of the tool configuration interface. Figure 6 shows the GUI design of the analysis and data logging interface. # A.2 Applications SimplyRetrieve has vast potential for various practical applications. For instance, it can serve as the foundation for building private, personalized, and lightweight generative AI systems. Sensitive and personal information can be securely stored and processed within the retrieval-centric platform. This approach enables organizations to develop interpretable and locally tailored generative AI systems for critical infrastructure. Additionally, the use of a relatively smaller language model as a contextual interpreter in this approach facilitates seamless integration into edge computing environments. The decreasing costs of data storage devices also make it feasible to establish large-scale knowledge bases. Furthermore, SimplyRetrieve paves the way for the development of LLM-based personalized AI assistants. Lastly, an in-depth exploration of LLM-based retrieval-centric generation using SimplyRetrieve may offer valuable insights and opportunities for future research. # A.3 Prompt Catalogs Table 5 shows the prompts used in the evaluation results of Section 4 while Table 6 shows sample prompts that may exhibit retrieval-centric behaviors. Prompts are passed to the LLM in the following format: AI Prefix + Retriever Prefix + Retrieved Knowledge Base + Retriever Suffix + Model Prefix + Query + Model Suffix. # A.4 Evaluation Data Table 7 presents the data used for evaluating the performance of our proposed tool in Section 4.2. We employed the Llama-2-13B-chat model (Touvron et al., 2023b) with a customized prompt ("relevant information." Please create a query and answer from the paragraph above) to generate query and label pairs automatically from relevant information on the website of an organization. # A.5 Ablation Study
2308.03983#29
2308.03983#31
2308.03983
[ "2302.13971" ]
2308.03983#31
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
As shown in Table 4, our ablation study reveals that adjusting Explicit Prompt-Weighting in SimplyRetrieve leads to significant improvements in Rouge-L scores. Interestingly, increasing the weightage to 50% yields the highest improvement, beyond which the performance remains relatively stable. This suggests that the top 50% of retrieved knowledge bases are crucial for achieving high accuracy. However, it is important to note that these findings may not generalize to all datasets or knowledge bases, and further investigation may be necessary to determine optimal weightages for specific use cases. In comparing the response times for each query across different settings, we observe that the response times remain relatively consistent for all cases of RCG, while they increase significantly in the baseline (ROG) setting. Despite the fact that RCG processes longer prompts than the baseline, we observe a decrease in processing time owing to the increased precision and brevity of the generated responses. Table 4 (Ablation study of Explicit Prompt-Weighting in SimplyRetrieve; Approach: Rouge-L, time/query(s)) — ROG: 0.186, 17.22; RCG-EPW-10: 0.275, 12.72; RCG-EPW-20: 0.313, 13.00; RCG-EPW-30: 0.403, 13.06; RCG-EPW-40: 0.354, 11.98; RCG-EPW-50: 0.414, 12.46; RCG-EPW-60: 0.331, 11.36; RCG-EPW-70: 0.392, 13.56; RCG-EPW-80: 0.306, 16.32; RCG-EPW-90: 0.378, 13.13; RCG: 0.413, 11.67. Table 5 (Prompts used in the evaluation results of Section 4) — AI Prefix: (empty); Retriever Prefix: "; Retriever Suffix: " answer the following question with the provided knowledge.; Model Prefix: (empty); Model Suffix: AI:.
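The Table 5 fields combine with the retrieved knowledge and query through the concatenation order given in Appendix A.3 (AI Prefix + Retriever Prefix + Retrieved Knowledge Base + Retriever Suffix + Model Prefix + Query + Model Suffix). Below is a small sketch of that composition using the Table 5 values; the function name, newline placement, and example strings are illustrative assumptions rather than the tool's own API:

```python
def build_prompt(knowledge: str, query: str,
                 ai_prefix: str = "",
                 retriever_prefix: str = '"',
                 retriever_suffix: str = '" answer the following question with the provided knowledge.',
                 model_prefix: str = "",
                 model_suffix: str = "AI:") -> str:
    # AI Prefix + Retriever Prefix + Retrieved Knowledge Base + Retriever Suffix
    # + Model Prefix + Query + Model Suffix
    return f"{ai_prefix}{retriever_prefix}{knowledge}{retriever_suffix}\n{model_prefix}{query}\n{model_suffix}"

print(build_prompt(
    knowledge="Kioxia operates fabrication facilities in Yokkaichi and Kitakami.",
    query="Where are Kioxia's factories located?",
))
```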
2308.03983#30
2308.03983#32
2308.03983
[ "2302.13971" ]
2308.03983#32
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Table 6 (Sample Prompts Catalog of Retrieval-Centric Generation in SimplyRetrieve) — Sample 1: AI Prefix: you are a Retrieval-Centric AI. Knowledge below are provided.; Retriever Prefix: "; Retriever Suffix: " only use the provided knowledge to answer the following question.; Model Prefix: (empty); Model Suffix: Response:. Sample 2: AI Prefix: (empty); Retriever Prefix: "; Retriever Suffix: " answer the following question with the provided knowledge.; Model Prefix: (empty); Model Suffix: AI:. Sample 3: AI Prefix: (empty); Retriever Prefix: "; Retriever Suffix: " only use the provided knowledge to answer the following question.; Model Prefix: (empty); Model Suffix: AI:. Sample 4: AI Prefix: you are a Retrieval-Centric AI. Knowledge below are provided.; Retriever Prefix: "; Retriever Suffix: " only use the provided knowledge to answer the following question.; Model Prefix: (empty); Model Suffix: AI:. Figure 4: The Prompt-Engineering interface of SimplyRetrieve. The Tab is for editing, updating and saving of model-related and retrieval-related prompts. Available prompts are AI Prefix, Retriever Prefix, Retriever Suffix, Model Prefix and Model Suffix. Figure 5: The Tool Configuration interface of SimplyRetrieve. The Tab is for modifying, updating and saving all configurable settings.
2308.03983#31
2308.03983#33
2308.03983
[ "2302.13971" ]
2308.03983#33
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Figure 6: The Analysis and Data Logging interface of SimplyRetrieve. The Tab is for analysis and logging of retrieved knowledge base, query, response, and retrieval sentence-level and token-level similarity scores. Users can save all the logged data for further analysis.
2308.03983#32
2308.03983#34
2308.03983
[ "2302.13971" ]
2308.03983#34
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
What is the purpose of establishing KIOXIA Iwate Corporation? When did Kioxia start operating its new fabrication facility (Fab7)? What are some ways in which the Yokkaichi Plant contributes to society? What is KIOXIA's approach to corporate social responsibility? What kind of environmental initiatives has KIOXIA taken at its Yokkaichi Plant? What was the total revenue for the Kioxia group in fiscal year 21? What initiatives has the Yokkaichi Plant taken to protect the environment? What is KIOXIA's mission statement? What is the purpose of collecting over two billion data items daily? According to the text, Kioxia started operating its new fabrication facility (Fab7) in the fall of 2022. The Yokkaichi Plant contributes to society by manufacturing high-quality flash memory and SSDs that support the digitalization of various industries, as well as through its research and development (R&D) activities. Additionally, the plant prioritizes legal compliance, environmental protection, and social initiatives, making it a responsible corporate citizen. According to the passage, KIOXIA's approach to corporate social responsibility involves making their Yokkaichi Plant a factory that is admired and trusted by everyone in the community. They aim to contribute to local development and meet the expectations of local communities by being a responsible member of society. At KIOXIA's Yokkaichi Plant, all employees, including resident suppliers, actively participate in various initiatives to contribute to society by improving the environment. These include measures to mitigate the environmental burden of the company's semiconductor manufacturing facilities. According to the passage, the total revenue for the Kioxia group in fiscal year 21 was 1,526.5 billion yen. The Yokkaichi Plant has taken various initiatives such as reducing greenhouse gas emissions, conserving water resources, and implementing recycling programs to protect the environment. These efforts aim to not only reduce the plant's ecological footprint but also raise awareness among local communities about the importance of environmental conservation.
2308.03983#33
2308.03983#35
2308.03983
[ "2302.13971" ]
2308.03983#35
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
s mis- sion statement? What is the purpose of collecting over two bil- lion data items daily? According to the text, Kioxia started operating its new fabrication facility (Fab7) in the fall of 2022. The Yokkaichi Plant contributes to society by manufacturing high-quality flash memory and SSDs that support the digitalization of various industries, as well as through its research and development (R&D) activities. Additionally, the plant prioritizes legal compliance, environmental protection, and social initiatives, making it a responsible corporate citizen. According to the passage, KIOXIAâ s approach to corporate social responsibility involves making their Yokkaichi Plant a factory that is admired and trusted by everyone in the community. They aim to contribute to local development and meet the expectations of local communities by being a responsible member of society. At KIOXIAâ s Yokkaichi Plant, all employees, including resident suppliers, actively partici- pate in various initiatives to contribute to society by improving the environment. These include measures to mitigate the environmental burden of the companyâ s semiconductor manufacturing facilities. According to the passage, the total revenue for the Kioxia group in fiscal year 21 was 1,526.5 billion yen. The Yokkaichi Plant has taken various initiatives such as reducing greenhouse gas emis- sions, conserving water resources, and implementing recycling programs to protect the environment. These efforts aim to not only reduce the plantâ s ecological footprint but also raise awareness among local communities about the importance of environmental conservation.
2308.03983#34
2308.03983#36
2308.03983
[ "2302.13971" ]
2308.03983#36
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
According to the passage, KIOXIAâ s mission statement is to uplift the world with â mem- oryâ , which means they aim to provide high-quality memory and SSD products that offer fast speeds, large capacities, and reliable performance. Additionally, the company seeks to achieve this goal through collaboration and knowledge sharing across different depart- ments and organizations. To maintain high-quality products by analyzing the vast amounts of data generated during the manufacturing process using advanced technologies like deep learning and AI. Table 7: Dataset used in the evaluation results of Section 4.2.
2308.03983#35
2308.03983
[ "2302.13971" ]
2308.03688#0
AgentBench: Evaluating LLMs as Agents
arXiv:2308.03688v2 [cs.AI] 25 Oct 2023 Technical Report (v0.2) # AGENTBENCH: EVALUATING LLMS AS AGENTS Xiao Liu1,*, Hao Yu1,*, Hanchen Zhang1, Yifan Xu1, Xuanyu Lei1, Hanyu Lai1, Yu Gu2, Hangliang Ding1, Kaiwen Men1, Kejuan Yang1, Shudan Zhang1, Xiang Deng2, Aohan Zeng1, Zhengxiao Du1, Chenhui Zhang1, Sheng Shen3, Tianjun Zhang3, Yu Su2, Huan Sun2, Minlie Huang1, Yuxiao Dong1, Jie Tang1 1Tsinghua University, 2The Ohio State University, 3UC Berkeley # ABSTRACT Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AGENTBENCH, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AGENTBENCH are released at https://github.com/THUDM/AgentBench.
2308.03688#1
2308.03688
[ "2204.02311" ]
2308.03688#1
AgentBench: Evaluating LLMs as Agents
Figure 1: An overview of LLMs on AGENTBENCH. While LLMs begin to manifest their proficiency in LLM-as-Agent, gaps between models and the distance toward practical usability are significant. (a) Typical LLMs' AgentBench performance (relative) against the best in each environment, covering API-based commercial LLMs (e.g., gpt-4, claude-2, gpt-3.5-turbo, text-davinci-003, claude-instant, chat-bison-001, text-davinci-002) and OSS LLMs (e.g., codellama-34b, vicuna-13b, llama-2-70b, llama-2-13b, dolly-12b, chatglm-6b, oasst-12b). (b) Overall AgentBench scores across the 8 environments, ranging from gpt-4 at 4.01 down to oasst-12b at 0.03 (e.g., codellama-34b 0.96, vicuna-13b 0.93, llama-2-70b 0.78, llama-2-13b 0.77, dolly 0.14, chatglm-6b 0.11); dashed lines mark the averages of the two LLM types (2.15 and 0.51). # INTRODUCTION Intelligent agents and autonomous entities (Searle, 1970; Maes, 1994; Wooldridge & Jennings, 1995) that are capable of decision-making and action execution in particular environments have been key [Footnotes: *XL and HY are lead authors that contributed equally. Email: {shawliu9,longinyh}@gmail.com. Work partially done when HY, YG visited Tsinghua University. Website for AGENTBENCH leaderboard & demos: https://llmbench.ai/agent]
2308.03688#0
2308.03688#2
2308.03688
[ "2204.02311" ]
2308.03688#2
AgentBench: Evaluating LLMs as Agents
[Figure 2 examples, one real-world instruction per environment. Operating System (on an Ubuntu bash terminal): "Recursively set all files in the directory to read-only, except those of mine." Knowledge Graph (given Freebase APIs): "What musical instruments do Minnesota-born Nobel Prize winners play?" Database (given MySQL APIs and existing tables): "Grade students over 60 as PASS in the table." Digital Card Game (on the GUI of Aquawar): "This is a two-player battle game, you are a player with four pet fish cards ..." Lateral Thinking Puzzles: "A man walked into a restaurant, ordered a bowl of turtle soup, and after finishing it, he committed suicide. Why did he do that?" House-Holding (in the middle of a kitchen in a simulator): "Please put a pan on the dining table."]
2308.03688#1
2308.03688#3
2308.03688
[ "2204.02311" ]
2308.03688#3
AgentBench: Evaluating LLMs as Agents
[Web Shopping / Web Browsing example (on the official website of an airline): "Book the cheapest flight from Beijing to Los Angeles in the last week of July."] Figure 2: AGENTBENCH is the first systematic benchmark to evaluate LLM-as-Agent on a wide array of real-world challenges and 8 distinct environments. In total, 27 LLMs are examined in this edition. concepts of artificial intelligence (AI) historically. Notwithstanding substantial advancements in deep learning algorithms applied in both computer vision and natural language processing (NLP), their potential for developing efficient and practically usable assisting agents remains largely unexplored. The advent of Large Language Models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023), such as GPT-4 (OpenAI, 2023), has brought plenty of new opportunities to this realm. Through extensive alignment training (Ouyang et al., 2022; Wei et al., 2022a; Sanh et al., 2022), LLMs have not only mastered traditional NLP tasks but also showcased an impressive ability to comprehend human intent and execute instructions. This has spurred the development of various LLM-based applications for autonomous goal completion (like AutoGPT (Richards, 2023), BabyAGI (Nakajima, 2023), AgentGPT (age, 2023)) as well as LLM agents situated in social and game contexts (Park et al., 2023; Wang et al., 2023b; Zhu et al., 2023), sparking substantial public interest and discussions. Despite these advancements, the lack of a systematic and standard benchmark to evaluate LLM-as-Agent presents a critical challenge. Historically, text-based game environments (Osborne et al., 2022; Côté et al., 2019; Hausknecht et al., 2020; Urbanek et al., 2019) have been employed for language agent evaluation. But they often suffer from the limitation of closed, discrete action spaces, as well as their primarily narrow focus on models' commonsense grounding.
2308.03688#2
2308.03688#4
2308.03688
[ "2204.02311" ]
2308.03688#4
AgentBench: Evaluating LLMs as Agents
More recently, attempts on embodied agents (Reed et al., 2022; Huang et al., 2022; Ahn et al., 2022) have employed complicated multi-modal simulators based on games (Küttler et al., 2020; Fan et al., 2022), GUI (Shi et al., 2017; Toyama et al., 2021), and indoor scenes (Shen et al., 2021; Srivastava et al., 2022). However, these simulators, despite their complexity, do not accurately reflect the practical use cases of LLMs, and their multi-modal nature creates a hurdle for the urgent evaluation of existing text-only LLMs. Finally, most benchmarks now for agents focus on single environments and thus fail to provide a comprehensive overview of LLMs across diverse application scenarios. To address these challenges, we introduce AGENTBENCH, a multi-dimensional benchmark designed to evaluate LLM-as-Agent across a spectrum of different environments. AGENTBENCH encompasses eight distinct environments (Cf. Figure 4), which could be categorized into three types of groundings: • Code: Operating System, Database, Knowledge Graph (Anonymous, 2023) • Game: Digital Card Game, Lateral Thinking Puzzles, House-Holding (Shridhar et al., 2020b) • Web: Web Shopping (Yao et al., 2022), Web Browsing (Deng et al., 2023) All datasets, whether newly created or adapted from existent ones, are meticulously designed and reformulated to simulate interactive environments where text-only LLMs can operate as autonomous agents. AGENTBENCH thus systematically evaluates an LLM's core abilities, including following instructions (Ouyang et al., 2022), coding (Chen et al., 2021), knowledge acquisition (Joshi et al., 2017; Talmor et al., 2019), logical reasoning (Srivastava et al., 2023), and commonsense grounding (Shridhar et al., 2020a). It serves as an ideal testbed for both LLM and agent evaluation.
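For readers keeping track of the eight environments, the grouping and the per-environment metrics described here (and in Table 2 later) can be summarized in code form. The dictionary below is this document's own illustrative sketch, not an interface of the released AGENTBENCH toolkit; all field names are assumptions.

```python
# Illustrative summary of the 8 AgentBench environments; grouping and metrics
# follow the text and Table 2, but the structure itself is hypothetical.
AGENTBENCH_TASKS = {
    "os":  {"name": "Operating System",            "grounding": "code", "metric": "success_rate"},
    "db":  {"name": "Database",                    "grounding": "code", "metric": "success_rate"},
    "kg":  {"name": "Knowledge Graph",             "grounding": "code", "metric": "answer_f1"},
    "dcg": {"name": "Digital Card Game",           "grounding": "game", "metric": "reward"},
    "ltp": {"name": "Lateral Thinking Puzzles",    "grounding": "game", "metric": "game_progress"},
    "hh":  {"name": "House-Holding (ALFWorld)",    "grounding": "game", "metric": "success_rate"},
    "ws":  {"name": "Web Shopping (WebShop)",      "grounding": "web",  "metric": "reward"},
    "wb":  {"name": "Web Browsing (Mind2Web)",     "grounding": "web",  "metric": "step_success_rate"},
}

if __name__ == "__main__":
    for key, spec in AGENTBENCH_TASKS.items():
        print(f"{key:>3}: {spec['name']} [{spec['grounding']}] -> {spec['metric']}")
```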
2308.03688#3
2308.03688#5
2308.03688
[ "2204.02311" ]
2308.03688#5
AgentBench: Evaluating LLMs as Agents
In addition, we develop a unified evaluation toolkit for LLMs to operate on diverse customized agent tasks, thus enabling a comprehensive benchmarking of the LLM-as-Agent ability of 27 different LLMs on AGENTBENCH, including both API-based and OSS models. Our results reveal that top-tier # Technical Report (v0.2) Table 1: AGENTBENCH evaluates 27 API-based or OSS LLMs on LLM-as-Agent challenges.
2308.03688#4
2308.03688#6
2308.03688
[ "2204.02311" ]
2308.03688#6
AgentBench: Evaluating LLMs as Agents
[Table 1 lists the 27 evaluated LLMs together with their size, access form, version, and creator. API-based commercial LLMs (parameter counts undisclosed): gpt-4 (OpenAI, 2023), gpt-3.5-turbo (OpenAI, 2022), text-davinci-003 (Ouyang et al., 2022), text-davinci-002 (Ouyang et al., 2022), claude-2 (Anthropic, 2023b), claude (Anthropic, 2023a), claude-instant (Anthropic, 2023a), and chat-bison-001 (Anil et al., 2023). Open-sourced LLMs: chatglm-6b (Zeng et al., 2022; Du et al., 2022), codegeex2-6b (Zheng et al., 2023), codellama-34b/13b/7b (Rozière et al., 2023), dolly-12b (Conover et al., 2023), llama2-70b/13b/7b (Touvron et al., 2023), guanaco-65b/33b (Dettmers et al., 2023), vicuna-33b/13b/7b (Chiang et al., 2023), wizardlm-30b/13b (Xu et al., 2023), koala-13b (Geng et al., 2023), oasst-12b (LAION, 2023), and openchat-13b (Wang et al., 2023a). The specific model versions used in the evaluation are also listed in Table 3.]
2308.03688#5
2308.03688#7
2308.03688
[ "2204.02311" ]
2308.03688#7
AgentBench: Evaluating LLMs as Agents
models like GPT-4 are capable of handling a wide array of real-world tasks, indicating the potential for developing a potent, continuously learning agent. However, we also note a significant performance gap between these top-tier models and their OSS competitors. Despite the recent success of OSS LLMs and their competitive scores on several benchmarks (Li et al., 2023; Chen et al., 2021; Cobbe et al., 2021), their performance on the challenging AGENTBENCH tasks lags considerably. This underscores the necessity for additional efforts to enhance the learning abilities of OSS LLMs. We identify portions of agent task failures in different environments and LLMs, unveiling the insufficient abilities of long-term reasoning, decision-making, and instruction following in existing LLMs. Comparisons between different LLMs manifest that a proper strategy of introducing code training can help improve LLM-as-Agent. Alignment training over high-quality data (e.g., data generated by gpt-4) could also help improve LLM agents. In summary, our contributions are: • We introduce the concept of evaluating LLMs as agents and present AGENTBENCH, a comprehensive benchmark to standardize the evaluation. It defines eight distinct environments of 3 types based on real-world scenarios, offering a practical testbed for LLMs' wide array of capabilities. • We perform a thorough evaluation of 27 different LLMs using AGENTBENCH, uncovering a significant performance gap between leading API-based commercial LLMs and OSS models. We also quantitatively analyze the reasons for failures in existing LLM agents and highlight directions for improvement, such as code training and higher-quality alignment data.
2308.03688#6
2308.03688#8
2308.03688
[ "2204.02311" ]
2308.03688#8
AgentBench: Evaluating LLMs as Agents
• To facilitate the evaluation of LLM-as-Agent, we have introduced an integrated toolkit grounded in the Server-Client architecture, focusing on modular and scalable design principles. This enables easy customization of model assessments for any LLMs using the HTTP protocol. Complemented by its associated datasets and environments, this toolkit is now openly accessible to the broader research community. # 2 LLM-AS-AGENT: DEFINITION AND PRELIMINARY Here, we formalize the terms for describing the evaluation of LLMs as agents and the necessary preliminary knowledge for using LLMs in the context of agent evaluation. Definition: Interactive Evaluation of LLM-as-Agent. The interactive evaluation of LLM-as-Agent could be regarded as a Partially Observable Markov Decision Process (S, A, T, R, U, O), which comprises state space S, action space A, transition function T : S × A → S, reward assigning function R, task instruction space U, and observation space O. Here, we denote an LLM agent as M. Chain-of-Thought (CoT) and Other Reasoning Strategies. Since LLM-as-Agent requires LLMs' strong reasoning ability, CoT (Wei et al., 2022b), which has been considered a de facto strategy in related evaluation together with actions (Yao et al., 2023b), is also adopted in AGENTBENCH. Despite many improved strategies proposed later, such as introducing ensemble (Wang et al., 2023c), reflection (Shinn et al., 2023), and search (Yao et al., 2023a), we evaluate LLMs with the most primitive CoT in AGENTBENCH. Without multiple trials, repeated generations, or complicated strategies, CoT is the easiest, cheapest, and most common way for people to deploy LLM agents. Typical Types of Finish Reasons. Despite LLMs' capabilities, we show in AGENTBENCH that even the strongest gpt-4 is not qualified as a practically usable agent. We identify and categorize finish reasons of LLM agents on AGENTBENCH tasks into five typical types: # Technical Report (v0.2)
2308.03688#7
2308.03688#9
2308.03688
[ "2204.02311" ]
2308.03688#9
AgentBench: Evaluating LLMs as Agents
Typical Types of Finish Reasons. Despite LLMsâ capabilities, we show in AGENTBENCH that even the strongest gpt-4 is not qualified as a practically usable agent. We identify and categorize finish reasons of LLM agents on AGENTBENCH tasks into five typical types: 3 # Technical Report (v0.2) â ¢ Context Limit Exceeded (CLE): the length of interaction history exceeds the LLMâ s maximum context length (only happened in 2,048-length LLMs text-davinci-002 and 003). Invalid Format (IF): the agent does not follow the format instruction. â ¢ Invalid Action (IA): the agent follows the format instruction, but its selected action is invalid. â ¢ Task Limit Exceeded (TLE): the agent does not solve the problem after reaching the predefined maximum interaction turns or begins to do repeated generations for many turns. and Complete (task ends normally). While IF and IA are mostly caused by LLMsâ poor instruction following, TLE often indicates a weak multi-turn ability in certain tasks. # 3 COMPOSITION OF AGENTBENCH: A BRIEF LOOK In this section, we briefly introduce the datasets and environments that compose the AGENTBENCH. Compared to previous agent evaluation benchmarks (Côté et al., 2019; Fan et al., 2022), AGENT- BENCH concentrates on the practical evaluation of LLMs via Chain-of-Thought (CoT) (Wei et al., 2022b; Yao et al., 2023b) prompting, including code-grounded, game-grounded, and web-grounded scenarios. They pinpoint promising directions of LLMsâ applications with autonomous mission com- pletion, and their versatility avoids task-specific modelsâ (e.g., code-specific LLMs) overperformance on AGENTBENCH. Due to page limit, for details of construction, evaluation, and prompt examples, please refer to Appendix. 3.1 CODE-GROUNDED ENVIRONMENTS Since LLMs can generate high quality codes (Chen et al., 2021), a very practical mission for LLM agents is to assist human interaction with computer interfaces. Here, we introduce three three environments depending on coding and reasoning abilities as representatives in AGENTBENCH. Operating System (OS). Allowing LLMs to access and manipulate OS in the terminal is a fascinating but challenging mission.
2308.03688#8
2308.03688#10
2308.03688
[ "2204.02311" ]
2308.03688#10
AgentBench: Evaluating LLMs as Agents
Despite attempts on translating natural language to Shell commands (Lin et al., 2018), few prior efforts evaluate models in executable environments. We aim to evaluate LLMs in genuine OSâ interactive bash environments (i.e., Ubuntu Docker (Merkel et al., 2014)) on human questions with deterministic answers (e.g., number of users with non-/home directories in an OS.) or series of operations for practical goals (e.g., recursively set all directory files to read-only, excluding mine). We adopt the success rate (SR) as the evaluation metric. (Cf. Appendix B for more details) Database (DB). As database analysis is crucial but also difficult in many daily affairs, it is paramount to examine LLMsâ abilities to operate on real databases via SQL. Prior research has a significant emphasis on individual procedures, such as translation between SQL and natural language (Zhong et al., 2017), or answering questions given individual small tables (Nan et al., 2021; Iyyer et al., 2017). However, few consider evaluating models on the complete pipeline as a whole. Therefore, AGENTBENCH evaluates LLMs on authentic SQL interfaces, databases, multiple tables, and different types of queries as is in the real world. We adopt the SR as the main evaluation metric. (Cf. Appendix C for more details) Knowledge Graph (KG (Anonymous, 2023)). Engaging with contemporary KGs, which are often vast in size (e.g., FREEBASE (Bollacker et al., 2008) has over 45M entities and 3B facts), demands a broad range of skills from an intelligent agent (Gu et al., 2023). Operating in such environments, which are only partially observable, requires the agent to make decisions with incomplete information and manage inherent uncertainties with various skills, including language understanding (e.g., intricacies and subtleties), planning (e.g., breaking down instructions into more manageable components), and tool using (e.g., interact with KG interfaces). As a result, we propose KG as a representative testing ground to assess the decision-making abilities of AI agents. We adopt question answering as the basic task formulation and consequently the answer F1 as the metric. (Cf. Appendix D for more details) 3.2 GAME-GROUNDED ENVIRONMENTS
2308.03688#9
2308.03688#11
2308.03688
[ "2204.02311" ]
2308.03688#11
AgentBench: Evaluating LLMs as Agents
Playing games usually requires strong capabilities in designing strategies, following instructions, and reasoning. Compared to code-grounded tasks, tasks in game-grounded environments require no expertise in coding but a more integral grasp of commonsense and world knowledge. Technical Report (v0.2) Digital Card Game (DCG). Games, especially those that require strategies and planning, could serve as simulated environments for intelligent agent development. DCG (e.g., Hearthstone (Hoover et al., 2020)), instead, is an ideal option for text-only LLM evaluation. It usually involves abundant text descriptions for cards, turn-based competition, and thoughtful playing strategies to win, testing a model's understanding of game rules, operating logic, and abilities to form strategic decisions based on current conditions and past experiences in the game. In AGENTBENCH we adapt a simplified DCG system,
2308.03688#10
2308.03688#12
2308.03688
[ "2204.02311" ]
2308.03688#12
AgentBench: Evaluating LLMs as Agents
Aquawar1, from the 2021 Tsinghua University Agent Competition (THUAC) hosted by the Student Association for Science and Technology in the Department of Computer Science and Technology (CST-SAST), for evaluating LLM-as-Agent. In Aquawar, the agent acts as a player managing a team of fishes with different talents to battle against another team (controlled by our ad-hoc baseline agent) in a turn-based form. We report LLMs' win rate as the evaluation metric. (Cf. Appendix E for more details) Lateral Thinking Puzzles (LTP). Lateral thinking puzzles (Sloane, 1992), or situation puzzles (海龟汤, literally "turtle soup" in Chinese), is a popular group-playing game around the world. The game usually has a person hosting the puzzle and others guess by asking riddle-related questions. The host can only respond "yes", "no", or "irrelevant".
2308.03688#11
2308.03688#13
2308.03688
[ "2204.02311" ]
2308.03688#13
AgentBench: Evaluating LLMs as Agents
The game is terminated when one of the players recovers the critical plots of the puzzle. Its name derives from the psychological term "lateral thinking" (De Bono, 1970), which refers to the ability of deducing facts from unconventional perspectives and exploring new ideas. In this dataset, we first set up an LTP host system for automatic judging (Cf. Appendix F). To assess LLMs' lateral reasoning prowess, a diverse puzzle dataset of varied levels of difficulty is curated from the web. We break down the true plot into several bullets and measure the portion of guessed-out bullets (i.e., game progress) when an agent has exhausted the maximum number of playing rounds as the evaluation metric. Through this assessment, we aim to gain insights into the depth and agility of LLMs' lateral reasoning abilities. (Cf. Appendix F for more details) House-Holding (HH, ALFWorld (Shridhar et al., 2020b)). Embodied game environments such as house-holding, which require strong commonsense grounding, have been well-established for language agent evaluation (Côté et al., 2019). In AGENTBENCH, we assess the model's capability in accomplishing tasks in physical house-holding environments on the classical ALFWorld (Shridhar et al., 2020b) derived from the well-established text-game toolkit TextWorld (Côté et al., 2019). The agent needs to accomplish house-holding tasks such as "Put a pan on the dining table".
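The game-progress metric just described reduces to the fraction of ground-truth plot bullets that the host judges as recovered when the turn budget runs out. A minimal sketch follows; the host-side judging itself (done by the LTP host system) is out of scope here and the function is an illustration, not the benchmark's code.

```python
def game_progress(guessed_bullet_ids, num_bullets):
    """Fraction of the hidden plot's bullet points recovered by the end of the game."""
    if num_bullets <= 0:
        return 0.0
    return len(set(guessed_bullet_ids)) / num_bullets

# Example: the host judged 2 of the 4 key plot points as guessed.
print(game_progress({0, 3}, 4))  # 0.5
```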
2308.03688#12
2308.03688#14
2308.03688
[ "2204.02311" ]
2308.03688#14
AgentBench: Evaluating LLMs as Agents
We adopt the SR as the evaluation metric. (Cf. Appendix G for more details) 3.3 WEB-GROUNDED ENVIRONMENTS Web pages have been primary interfaces for people to interact in the real world. Thus, assessing LLM agents' behaviors in complex web environments would be critical and valuable for following development. Here, we adapt two existing web browsing datasets for practical evaluation over LLMs. Web Shopping (WS, WebShop (Yao et al., 2022)). Online shopping is a very practical and important part of modern life. Its trajectory, which comprises searching, viewing, and choosing desirable items on a real e-commerce website, requires autonomous agents' strong reasoning and decision-making abilities. Webshop (Yao et al., 2022), a simulated online shopping environment, exactly serves such a purpose for evaluating language agents. While it is originally evaluated on specifically trained models, we propose assessing LLMs with mere prompting. (Cf.
2308.03688#13
2308.03688#15
2308.03688
[ "2204.02311" ]
2308.03688#15
AgentBench: Evaluating LLMs as Agents
Appendix H for more details) Web Browsing (WB, Mind2Web (Deng et al., 2023)). General web environment is an ideal sandbox for training and evaluating intelligent agents. Mind2Web (Deng et al., 2023) is a very recently released general benchmark for developing and assessing web agents capable of executing intricate tasks across various website domains, given high-level user instructions. It designs feasible actions for website interactions, such as clicking, selecting, and typing, thereby facilitating a holistic evaluation of LLMs as web agents. Compared to Mind2Web's original setting, we make adaptations to allow its evaluation on prompted LLMs without additional fine-tuning. (Cf. Appendix I for more details) 1https://www.saiblo.net/ 5 # Technical Report (v0.2) Table 2: Statistics and metrics of 8 environments in AGENTBENCH evaluation. "SR" stands for Success Rate. "#Avg. Turn" denotes the estimated number of interacting turns to solve a single problem. In "#Dev" and "#Test"
2308.03688#14
2308.03688#16
2308.03688
[ "2204.02311" ]
2308.03688#16
AgentBench: Evaluating LLMs as Agents
, we provide the number of query samples and total expected interacting turns. Additionally, "Weight^-1" refers to the average score for a task across all models in our evaluation. For further clarification, please refer to Section 4.1 and Appendix B to I.

| | OS | DB | KG | DCG | LTP | HH | WS | WB |
|---|---|---|---|---|---|---|---|---|
| #Avg. Turn | 8 | 5 | 15 | 30 | 25 | 35 | 5 | 10 |
| Metric | SR | SR | F1 | Reward | Game Progress | SR | Reward | Step SR |
| #Dev | 26 / 240 | 60 / 300 | 20 / 300 | 12 / 360 | 20 / 500 | 20 / 700 | 80 / 400 | 31 / 400 |
| #Test | 144 / 1200 | 300 / 1500 | 150 / 2250 | 20 / 600 | 50 / 1250 | 50 / 1750 | 200 / 1000 | 177 / 1800 |
| Weight^-1 | 10.8 | 13.0 | 13.9 | 12.0 | 3.5 | 13.0 | 30.7 | 11.6 |

# 4 EVALUATION OF AGENTBENCH We extensively evaluate 27 LLMs, including API-based commercial models and open-sourced LLMs, to form a systematic view of the existing performance of LLM-as-Agent. We also design and release a simple plug-and-play evaluation toolkit to facilitate related LLM-as-Agent research. 4.1 EVALUATION SETUP Dataset Statistics. We report the statistics of datasets in AGENTBENCH in Table 2. For simplicity, we use the abbreviation of each dataset in the following part. All datasets are practical multi-turn interacting challenges, and their estimated solving turns for each individual problem range from 5 to 50. We provide two splits for each dataset: Dev and Test. The Dev split's all environments, answers, and checking scripts are public, while the Test is kept. We also carefully balance the evaluation comprehensiveness and efficiency in AGENTBENCH design, as LLMs' multi-turn interaction can be time-consuming. We set the size of Dev and Test to 269 and 1,091, respectively, resulting in around 4k and 13k calls for inference, approximately the identical amounts of calls for inference as MMLU (Hendrycks et al., 2021b) requires. LLMs to Evaluate.
2308.03688#15
2308.03688#17
2308.03688
[ "2204.02311" ]
2308.03688#17
AgentBench: Evaluating LLMs as Agents
As a systematic attempt to benchmark existing LLMs on LLM-as-Agent, we include in total 27 models for evaluation, which could be roughly classified into two categories: • API-based Commercial LLMs: mainly consist of LLM APIs without disclosed parameter amounts (Cf. Table 1). Due to more investments, their performances are usually better. • Open-sourced (OSS) LLMs: mostly come from the academia and some companies (Cf. Table 1). Due to limited computing resources, we only include OSS LLMs smaller than 70B here. Toolkit: Streamlining LLM Evaluation with API-Centric Approach and Environment Isolation. As Language Model (LLM) systems continue to advance in complexity and are primarily accessible through APIs, we have developed an evaluation toolkit that aligns with the API-oriented philosophy. This toolkit is meticulously designed to interact with APIs, simplifying the process of adapting and testing different LLMs. Researchers interested in evaluating their LLMs on AGENTBENCH only need to set up a model server accessible via the HTTP protocol. Moreover, dealing with diverse and intricate interaction environments poses a significant challenge. Uniformly configuring all these environments can be arduous and may lead to conflicts. To address this, we have implemented two key strategies. Firstly, we encapsulate tasks with complex environments into Docker images. Researchers can effortlessly utilize these images by mounting the code path and initiating the evaluation process with ease. Secondly, we have subdivided each task into separate workers, ensuring that the environments of these tasks remain isolated and free from conflicts. (Refer to Appendix A for further details.) Evaluation Prompt Setup. To accommodate the majority of existing dialogue models, our dialogue paradigm is structured around two roles, user (i.e., instruction & environment feedback) and agent, engaging and alternating with one another. We record interaction trajectories as a conversation history (u0, a0, · · · , uk, ak) involving the user and agent, where ui, ai represents the i-th round of the conversation history. When we perform inference, the conversation history must be like
2308.03688#16
2308.03688#18
2308.03688
[ "2204.02311" ]
2308.03688#18
AgentBench: Evaluating LLMs as Agents
Technical Report (v0.2) Table 3: Test set (standard) results of AGENTBENCH. A clear performance gap exists between top commercial LLMs (e.g., gpt-4) and OSS LLM competitors. "VER" stands for model version; "OA" stands for the overall AGENTBENCH score, a weighted average of all environments (Cf. Section 4.1). OS, DB, and KG are code-grounded; DCG, LTP, and HH are game-grounded; WS and WB are web-grounded.

| Type | Model | VER | OA | OS | DB | KG | DCG | LTP | HH | WS | WB |
|---|---|---|---|---|---|---|---|---|---|---|---|
| API | gpt-4 | 0613 | 4.01 | 42.4 | 32.0 | 58.8 | 74.5 | 16.6 | 78.0 | 61.1 | 29.0 |
| API | claude-2 | - | 2.49 | 18.1 | 27.3 | 41.3 | 55.5 | 8.4 | 54.0 | 61.4 | 0.0 |
| API | claude | v1.3 | 2.44 | 9.7 | 22.0 | 38.9 | 40.9 | 8.2 | 58.0 | 55.7 | 25.0 |
| API | gpt-3.5-turbo | 0613 | 2.32 | 32.6 | 36.7 | 25.9 | 33.7 | 10.5 | 16.0 | 64.1 | 20.0 |
| API | text-davinci-003 | - | 1.71 | 20.1 | 16.3 | 34.9 | 3.0 | 7.1 | 20.0 | 61.7 | 26.0 |
| API | claude-instant | v1.1 | 1.60 | 16.7 | 18.0 | 20.8 | 5.9 | 12.6 | 30.0 | 49.7 | 4.0 |
| API | chat-bison-001 | - | 1.39 | 9.7 | 19.7 | 23.0 | 16.6 | 4.4 | 18.0 | 60.5 | 12.0 |
| API | text-davinci-002 | - | 1.25 | 8.3 | 16.7 | 41.5 | 11.8 | 0.5 | 16.0 | 56.3 | 9.0 |
| OSS (Large) | llama-2-70b | chat | 0.78 | 9.7 | 13.0 | 8.0 | 21.3 | 0.0 | 2.0 | 5.6 | 19.0 |
| OSS (Large) | guanaco-65b | - | 0.54 | 8.3 | 14.7 | 1.9 | 0.1 | 1.5 | 12.0 | 0.9 | 10.0 |
| OSS (Large) | codellama-34b | instruct | 0.96 | 2.8 | 14.0 | 23.5 | 8.4 | 0.7 | 4.0 | 52.1 | 20.0 |
| OSS (Large) | vicuna-33b | v1.3 | 0.73 | 15.3 | 11.0 | 1.2 | 16.3 | 1.0 | 6.0 | 23.9 | 7.0 |
| OSS (Large) | wizardlm-30b | v1.0 | 0.46 | 13.9 | 12.7 | 2.9 | 0.3 | 1.8 | 6.0 | 4.4 | 1.0 |
| OSS (Large) | guanaco-33b | - | 0.39 | 11.1 | 9.3 | 3.2 | 0.3 | 0.0 | 6.0 | 6.2 | 5.0 |
| OSS (Small) | vicuna-13b | v1.5 | 0.93 | 10.4 | 6.7 | 9.4 | 0.1 | 8.0 | 8.0 | 41.7 | 12.0 |
| OSS (Small) | llama-2-13b | chat | 0.77 | 4.2 | 11.7 | 3.6 | 26.4 | 0.0 | 6.0 | 25.3 | 13.0 |
| OSS (Small) | openchat-13b | v3.2 | 0.70 | 15.3 | 12.3 | 5.5 | 0.1 | 0.0 | 0.0 | 46.9 | 15.0 |
| OSS (Small) | wizardlm-13b | v1.2 | 0.66 | 9.0 | 12.7 | 1.7 | 1.9 | 0.0 | 10.0 | 43.7 | 12.0 |
| OSS (Small) | vicuna-7b | v1.5 | 0.56 | 9.7 | 8.7 | 2.5 | 0.3 | 6.4 | 0.0 | 2.2 | 9.0 |
| OSS (Small) | codellama-13b | instruct | 0.56 | 3.5 | 9.7 | 10.4 | 0.0 | 0.0 | 0.0 | 43.8 | 14.0 |
| OSS (Small) | codellama-7b | instruct | 0.50 | 4.9 | 12.7 | 8.2 | 0.0 | 0.0 | 2.0 | 25.2 | 12.0 |
| OSS (Small) | koala-13b | - | 0.34 | 3.5 | 5.0 | 0.4 | 0.1 | 4.4 | 0.0 | 3.9 | 7.0 |
| OSS (Small) | llama-2-7b | chat | 0.34 | 4.2 | 8.0 | 2.1 | 6.9 | 0.0 | 0.0 | 11.6 | 7.0 |
| OSS (Small) | codegeex2-6b | - | 0.27 | 1.4 | 0.0 | 4.8 | 0.3 | 0.0 | 0.0 | 20.9 | 11.0 |
| OSS (Small) | dolly-12b | v2 | 0.14 | 0.0 | 0.0 | 0.0 | 0.1 | 1.2 | 0.0 | 0.4 | 9.0 |
| OSS (Small) | chatglm-6b | v1.1 | 0.11 | 4.9 | 0.3 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5 | 4.9 |
| OSS (Small) | oasst-12b | sft-4 | 0.03 | 1.4 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3 | 1.0 |
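To make the OA column concrete: as Section 4.1 explains, each task score is divided by the fixed all-model average for that task (the Weight^-1 row of Table 2) and the rescaled scores are then averaged over the 8 tasks. The short check below uses the gpt-4 row of Table 3 and reproduces its OA of 4.01; the function is a sketch of the described calculation, not code from the released benchmark.

```python
# Fixed per-task averages (the Weight^-1 row of Table 2) and gpt-4's task scores (Table 3).
TASK_AVG = {"os": 10.8, "db": 13.0, "kg": 13.9, "dcg": 12.0,
            "ltp": 3.5, "hh": 13.0, "ws": 30.7, "wb": 11.6}
GPT4_SCORES = {"os": 42.4, "db": 32.0, "kg": 58.8, "dcg": 74.5,
               "ltp": 16.6, "hh": 78.0, "ws": 61.1, "wb": 29.0}

def overall_score(scores, task_avg=TASK_AVG):
    # Rescale each task so that the all-model average maps to 1.0, then average across tasks.
    rescaled = [scores[task] / task_avg[task] for task in task_avg]
    return sum(rescaled) / len(rescaled)

print(round(overall_score(GPT4_SCORES), 2))  # 4.01, matching gpt-4's OA in Table 3
```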
2308.03688#17
2308.03688#19
2308.03688
[ "2204.02311" ]
2308.03688#19
AgentBench: Evaluating LLMs as Agents
(u0, a0, · · · , uk). We select the minimum r such that count of all tokens2 in (u0, ar, ur+1, · · · , uk) is not greater than 3500. And then we append "[NOTICE] 2r messages are omitted." into u0. After that, the sequence (u0, ar, ur+1, · · · , uk) is regarded as the final input in multi-turn chat format. However, in order to consider non-chat models, we append a post-processor. We feed the history into the model for chat models supporting multiple turns. For models supporting only text completion (e.g., text-davinci-003), we prepend "USER:" or "AGENT:" into each item in the history and finally append the string "AGENT:" to make models generate the agentâ s content. For task prompt organization, we adapted the format from (Yao et al., 2023b) to include both â Thoughtâ (for CoT) and â Actionâ but in one single turn. Usually, a simple CoT demonstration is provided in the task instruction for a better output format. To ensure reproducible results, we set temperature=0 (i.e., greedy decoding) in the inference on all tasks following (Wei et al., 2022b). Overall Score Calculation. We have observed that the score distribution for each task varies significantly as tasks differ in difficulty levels. As a consequence, a naively averaged score is heavily impacted by tasks that generally yield higher scores (e.g., Web Shopping in our observation), overshadowing those with lower scores and being unsuitable for AGENTBENCHâ
2308.03688#18
2308.03688#20
2308.03688
[ "2204.02311" ]
2308.03688#20
AgentBench: Evaluating LLMs as Agents
s purpose. Therefore, we produce the overall score by first resizing each taskâ s average score to 1 across all the models we evaluate and then averaging the scores across all tasks for each model (Cf. Table 2). To standardize and simplify score calculations for future studies, we utilize the reciprocal average score of all the tested LLMs in each task as a fixed weight for future overall score calculation. The total score is then computed as the average value obtained by multiplying the score of each task by its corresponding weight. This method ensures fairness and consistency in evaluation, enabling easier comparisons and analysis in future research. 2Because the tokenizers of each model is different, we simply calculate tokens like this: a word with length n occupies â n/6â token(s), and a non-blank character takes 1 token. 7 Technical Report (v0.2) # OS DB KG DCG LTP HH WS WB # Completed = 75.0 37.9 30.1 51.2 14.0 13.1 54.9 56.6 0.1 0.7 2.0 0.0 3.5 0.7 0.0 0.0 CLE Invalid Format 0.0 53.3 0.0 38.5 0.0 0.0 17.2 0.0 Invalid Action 0.9 0.0 0.0 10.2 0.0 64.1 0.0 8.4 23.9 8.0 67.9 0.0 82.5 22.1 27.8 35.0 TLE ° vicutt-13b codelidtha-34b 08 llama-2-13b openchat 13bp- vicuesap â '3ma-2-74b wizardim-13b â
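For concreteness, the history-truncation rule of Section 4.1 (find the smallest r such that (u0, ar, ur+1, ..., uk) fits a 3,500-token budget, then note the omission inside u0) can be sketched as below. The token heuristic only mirrors the paper's rough counting footnote, and both functions are illustrative assumptions rather than the toolkit's exact implementation.

```python
import math

def rough_token_count(text):
    # Crude stand-in for the footnote's heuristic (a word of length n counts as
    # ceil(n/6) tokens); an approximation, not the exact counting rule.
    return sum(math.ceil(len(word) / 6) for word in text.split())

def truncate_history(history, budget=3500):
    """history = [u0, a0, u1, a1, ..., uk]; keep u0 plus the most recent turns.

    Finds the smallest r such that (u0, a_r, u_{r+1}, ..., u_k) fits the budget,
    then records the omission inside u0, as described in the text.
    """
    u0, rest = history[0], history[1:]
    for r in range(len(rest) // 2 + 1):
        tail = rest[2 * r:]            # drops the r oldest (agent, user) message pairs after u0
        kept = [u0] + tail
        if sum(rough_token_count(m) for m in kept) <= budget:
            if r > 0:
                kept[0] = u0 + f"\n[NOTICE] {2 * r} messages are omitted."
            return kept
    return [u0]  # even the shortest window exceeds the budget; fall back to u0 alone
```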
2308.03688#19
2308.03688#21
2308.03688
[ "2204.02311" ]
2308.03688#21
AgentBench: Evaluating LLMs as Agents
= vicuna-7b [fj codellama-13b © Eicodellama-7b juanaco.bsb wizardim-30b @ guanaco-33b © a © ES AgentBench OA score o& koala-13b 5 Adare on ; © dolly-12b 004 TA sleaze 67 65 13 33 #Size (bil t ize (billion parameters) Table 4: Portions of different types of execution outcomes in 8 tasks. (CLE: Context Limit Exceeded, TLE: Task Limit Exceeded). Figure 3: AGENTBENCH OA scores with regard to all tested OSS LLMs. 4.2 MAIN RESULTS Overall and dataset-specific scores in AGENTBENCH are reported in Table 3. Surprisingly, on this challenging benchmark, we discover that some top LLMs are equipped with solid capabilities for dealing with real-world environmental interaction. For example, gpt-4 presents the best performance on 6 out of 8 datasets in AGENTBENCH; on HH, it achieves a success rate of 78%, indicating its practical usability in this scenario. claude-2 and claude follow gpt-4 but quite outperform gpt-3.5-turbo. Despite other API-based LLMsâ relatively poorer performance, regardless of tasks, most of them can solve quite a few percent of problems. All API-based LLMs have an AGENTBENCH overall score above 1.00. OSS LLMs, however, commonly fail to solve problems in some challenging tasks, such as KG, DCG, and HH. We plot their performance concerning their sizes in Figure 3. Generally, most open-sourced LLMs perform far poorer than API-based LLMs in AGENTBENCH (Avg. 0.51 v.s. 2.15). The most capable OSS LLM turns out to be codellama-34b, achieving an overall score of 0.96 but still presents a clear performance gap to gpt-3.5-turbo. This contrasts recent claims that some OSS LLMs are comparable to gpt-3.5-turbo and gpt-4. We still need much effort to produce stronger OSS LLMs to serve agent purposes. 4.3 ANALYSIS
2308.03688#20
2308.03688#22
2308.03688
[ "2204.02311" ]
2308.03688#22
AgentBench: Evaluating LLMs as Agents
In the evaluation, we analyze some important factors that impact an LLM agentâ s performance on AGENTBENCH, including outcome portion analysis, code training, and the difference between API-based commercial LLMs and OSS LLM competitors. More insights and case studies into the ability of planning, self-correction, and tool use are provided in Appendix J.2. Portion of Different Types of Execution Outcomes. We report ratios of different types of execution outcomes (Cf. Section 2 for introduction) in Table 4. It is Task Limit Exceeded that dominantly caused the incompleteness of AGENTBENCH tasks. It means that despite the instruction following of most LLM agents, they fail to solve the challenge in given time or fall into repeated generation when interaction turns grow up, indicating weak reasoning and decision-making abilities. In DB and DCG, LLM agents majorly encountered Invalid Format errors, meaning they do not correctly follow the instructionâ s format requirements. The format verification is stringent for DB, and no retry opportunities are provided. Furthermore, the taskâ s expected output may be close to certain modelsâ training data, yet not precisely aligned with. This discrepancy can lead the models to revert to their pre-trained formatting, inadvertently overlooking the specific requirements we provide. (Cf. Appendix J.2.1) For DCG, its instruction could be longer and more complicated than other tasks due to the need to introduce game rules, making some LLMs feel confused. In HH and WB, another major issue is about Invalid Action, where LLM agents generate actions beyond predefined action spaces. These two tasks provide many discrete action options at each turn, and many LLMs fail to generate an action from them and, therefore, cause errors. For specific ratios of each LLM, please refer to Appendix J.1.
2308.03688#21
2308.03688#23
2308.03688
[ "2204.02311" ]
2308.03688#23
AgentBench: Evaluating LLMs as Agents
Impact of Code Training. We find that code tuning might deeply influence a modelâ s way of inferential generation and thinking, even beyond topics just about coding. From the comparison of codellama and llama-2 series, tuning with code seems to give models an edge in tasks that follow a relatively static procedure (e.g., Web Shopping). But, this kind of tuning might also affect 8 # Technical Report (v0.2) the modelâ s general thinking ability, as codellama series does not perform as well in the Digital Card Game as llama-2 series. This points to a balance between being good at following procedures and being good at general thinking when tuning LLMs. Impact of High-Quality Alignment Data Training. Another helpful comparison would be between vicuna-13b and llama-2-13b. While they share the same base LLM, vicuna-13b is aligned by training on ShareGPTâ s data (generated by gpt-4 and gpt-3.5-turbo, shared by users) and llama-2-13b is aligned from scratch. As a result, vicuna-13b outperforms llama-2-13b on AGENTBENCH, and even performs comparably to 3 times larger codellama-34b. This indicates that high-quality alignment is still a key to develop better LLM agents. Unexpected Similar Performance of llama-2-13b and llama-2-70b. During our experi- ments, we were surprised to find that llama-2-13b and llama-2-70b perform similarly despite the significant gap between their sizes. After carefully checking and re-running experiments, the results are unchanged. We think that it indicates llama-2-70bâ s insufficient pre-training. While both llama-2-13b and llama-2-70b are pre-trained with 2T tokens, a larger LLM should be trained with more tokens according to the scaling law (Hoffmann et al., 2022). # 5 RELATED WORK Evaluation of LLMs.
2308.03688#22
2308.03688#24
2308.03688
[ "2204.02311" ]
2308.03688#24
AgentBench: Evaluating LLMs as Agents
The general capabilities of self-supervised (Liu et al., 2021) LLMs (Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022; Scao et al., 2022; Zeng et al., 2022; Touvron et al., 2023), especially those chat-aligned ones (Ouyang et al., 2022; Anthropic, 2023a; OpenAI, 2023), have refreshed peopleâ s impression on deep learning systems and significantly transcended the conventional scope of NLP evaluation. It thus makes the evaluation of LLMs an urgent and challenging problem. Compared to previous efforts focusing on a subset of specified tasks (Wang et al., 2019; Wang et al.; Gehrmann et al., 2021), an increasing number of benchmarks are including broader spectra of tasks and datasets (Hendrycks et al., 2021b; Liang et al., 2022; Srivastava et al., 2023) in the evaluation. However, most of them are still limited to traditional tasks and thus fail to evaluate LLMsâ open-ended generation, multi-turn interaction, and ability to act as agents. LLM-as-Agent. In pre-LLM era, text game environments such as TextWorld (Côté et al., 2019), Jericho (Hausknecht et al., 2020), and LIGHT (Urbanek et al., 2019) are dominant in language agent study which bases on BERT (Devlin et al., 2019) and reinforcement learning. With the advent of LLMs, the study of LLM agents begins to thrive (Huang et al., 2022), especially after Chain-of- Thought (Wei et al., 2022b) came out. ReAct (Yao et al., 2023b) is a pioneer work to combine CoT reasoning and actions in agent tasks.
2308.03688#23
2308.03688#25
2308.03688
[ "2204.02311" ]
2308.03688#25
AgentBench: Evaluating LLMs as Agents
Later, a bunch of advanced reasoning strategies (Kim et al., 2023; Shinn et al., 2023; Wang et al., 2023d; Liu et al., 2023; Yao et al., 2023a; Gu et al., 2023) and applications (Park et al., 2023; Richards, 2023; Nakajima, 2023; age, 2023) for LLM-as-Agent have emerged and arouse much public interest. Nevertheless, limited datasets and models and available on the topic, without a standard and comprehensive benchmark. AGENTBENCH presents the first systematic benchmark for evaluating LLM-as-Agent with a broad coverage of tasks and available LLMs. Additionally, it also initiates the idea of adopting agent tasks to measure LLM performance. Evaluating LLMs in Executive Environments. As LLMs become increasingly capable of real- world challenges, there is also a trend to evaluate them in executive environments rather than static datasets. Besides text games (e.g., ALFWorld (Shridhar et al., 2020b)), another main stream of works lies in code execution. APPS (Hendrycks et al., 2021a), HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) pioneer the effort to evaluate code LLMs for functional correctness instead of text similarity. The paradigm has been later widely recognized and adopted in following works (Li et al., 2022; Zheng et al., 2023; Nijkamp et al., 2023). However, few previous code evaluation frameworks consider multi-turn interactions. A concurrent work InterCode (Yang et al., 2023) releases a framework that allows evaluation of interaction between models and Bash and SQL environments, which are similar to OS and DB tasks in AGENTBENCH. # 6 CONCLUSION We present AGENTBENCH, a systematically designed multi-dimensional evolving benchmark for evaluating LLMs as agents. For the first time, we include such a wide array of up to 8 real- 9 # Technical Report (v0.2) world challenges to evaluate LLM agents, and establish a unified testing framework and toolkit for agile evaluation. An extensive study of 27 LLMs, including API-based and Open-sourced, is carefully conducted in a standard setting.
2308.03688#24
2308.03688#26
2308.03688
[ "2204.02311" ]
2308.03688#26
AgentBench: Evaluating LLMs as Agents
In our assessment, contemporary commercial models have demonstrated preliminary capabilities as agents in analysis, planning, execution of plans, tool invocation, and self-reflection. These abilities suggest their nascent proficiency in addressing real- world challenges. Conversely, we posit that open-source models might either lack some of these competencies or, at best, possess only a subset of them simultaneously. We expect AGENTBENCH to serve as a cornerstone for later study to develop better and more applicable intelligent LLM agents. # REFERENCES Agentgpt. Python. https://github.com/reworkd/AgentGPT, 2023. Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al.
2308.03688#25
2308.03688#27
2308.03688
[ "2204.02311" ]
2308.03688#27
AgentBench: Evaluating LLMs as Agents
Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023. Anonymous. Knowledge base question answering as tool learning. under review, 2023. Anthropic. Introducing claude, 2023a. URL https://www.anthropic.com/index/ introducing-claude. Anthropic. Claude 2, 2023b. URL https://www.anthropic.com/index/claude-2. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021. Kurt D. Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Jason Tsong-Li Wang (ed.), Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008, pp. 1247â
2308.03688#26
2308.03688#28
2308.03688
[ "2204.02311" ]
2308.03688#28
AgentBench: Evaluating LLMs as Agents
1250. ACM, 2008. doi: 10.1145/1376616.1376746. URL https://doi.org/10.1145/1376616.1376746. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
2308.03688#27
2308.03688#29
2308.03688
[ "2204.02311" ]
2308.03688#29
AgentBench: Evaluating LLMs as Agents
Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPSâ 20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1026â 1036, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.91. URL https://aclanthology.org/2020.findings-emnlp.91. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al.
2308.03688#28
2308.03688#30
2308.03688
[ "2204.02311" ]
2308.03688#30
AgentBench: Evaluating LLMs as Agents
Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023. 10 Technical Report (v0.2) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al.
2308.03688#29
2308.03688#31
2308.03688
[ "2204.02311" ]
2308.03688#31
AgentBench: Evaluating LLMs as Agents
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin.
2308.03688#30
2308.03688#32
2308.03688
[ "2204.02311" ]
2308.03688#32
AgentBench: Evaluating LLMs as Agents
Free dolly: Introducing the worldâ s first truly open instruction-tuned llm, 2023. URL https://www.databricks.com/blog/2023/04/ 12/dolly-first-open-commercially-viable-instruction-tuned-llm. Marc-Alexandre Côté, Akos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, et al.
2308.03688#31
2308.03688#33
2308.03688
[ "2204.02311" ]
2308.03688#33
AgentBench: Evaluating LLMs as Agents
Textworld: A learning environment for text-based games. In Computer Games: 7th Workshop, CGW 2018, Held in Con- junction with the 27th International Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, July 13, 2018, Revised Selected Papers 7, pp. 41â 75. Springer, 2019. Edward De Bono. Lateral thinking. New York, pp. 70, 1970. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171â 4186, 2019. Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 320â 335, 2022.
2308.03688#32
2308.03688#34
2308.03688
[ "2204.02311" ]
2308.03688#34
AgentBench: Evaluating LLMs as Agents
Jack Edmonds and Richard M Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM (JACM), 19(2):248â 264, 1972. Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. Advances in Neural Information Processing Systems, 35: 18343â 18362, 2022.
2308.03688#33
2308.03688#35
2308.03688
[ "2204.02311" ]
2308.03688#35
AgentBench: Evaluating LLMs as Agents
LR Ford Jr and DR FuË lkerson. Flows in networks. 1962. Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, An- uoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, et al. The gem benchmark: Natural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pp. 96â
2308.03688#34
2308.03688#36
2308.03688
[ "2204.02311" ]
2308.03688#36
AgentBench: Evaluating LLMs as Agents
120. Association for Computational Linguistics, 2021. Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. Koala: A dialogue model for academic research. Blog post, April, 1, 2023. Yu Gu and Yu Su. ArcaneQA: Dynamic program induction and contextualized encoding for knowledge base question answering. In Proceedings of the 29th International Conference on Computational Linguistics, pp. 1718â
2308.03688#35
2308.03688#37
2308.03688
[ "2204.02311" ]
2308.03688#37
AgentBench: Evaluating LLMs as Agents
1731, Gyeongju, Republic of Korea, October 2022. Inter- national Committee on Computational Linguistics. URL https://aclanthology.org/ 2022.coling-1.148. 11 # Technical Report (v0.2) Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, and Yu Su. Beyond i.i.d.: Three levels of generalization for question answering on knowledge bases. In Proceedings of the Web Conference 2021. ACM, apr 2021. doi: 10.1145/3442381.3449992. URL https: //doi.org/10.1145%2F3442381.3449992.
2308.03688#36
2308.03688#38
2308.03688
[ "2204.02311" ]
2308.03688#38
AgentBench: Evaluating LLMs as Agents
Yu Gu, Xiang Deng, and Yu Su. Donâ t generate, discriminate: A proposal for grounding language models to real-world environments. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 4928â 4949, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https://aclanthology.org/ 2023.acl-long.270. Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Côté, and Xingdi Yuan. Interac- tive fiction games: A colossal adventure. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 7903â 7910, 2020. Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938, 2021a. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021b. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022. Amy K Hoover, Julian Togelius, Scott Lee, and Fernando de Mesentier Silva. The many ai challenges of hearthstone. KI-Künstliche Intelligenz, 34:33â 43, 2020. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118â 9147. PMLR, 2022.
2308.03688#37
2308.03688#39
2308.03688
[ "2204.02311" ]
2308.03688#39
AgentBench: Evaluating LLMs as Agents
Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1821â 1831, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1167. URL https: //aclanthology.org/P17-1167. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly In Proceedings of the 55th Annual supervised challenge dataset for reading comprehension. Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601â 1611, 2017. Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023. Heinrich Küttler, Nantas Nardelli, Alexander Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The nethack learning environment. Advances in Neural Information Processing Systems, 33:7671â 7684, 2020. # LAION. Open-assistant. https://github.com/LAION-AI/Open-Assistant, 2023. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models, 2023. Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. Science, 378(6624):1092â 1097, 2022. 12 Technical Report (v0.2)
2308.03688#38
2308.03688#40
2308.03688
[ "2204.02311" ]
2308.03688#40
AgentBench: Evaluating LLMs as Agents
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022. Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, and Michael D Ernst.
2308.03688#39
2308.03688#41
2308.03688
[ "2204.02311" ]
2308.03688#41
AgentBench: Evaluating LLMs as Agents
Nl2bash: A corpus and semantic parser for natural language interface to the linux operating system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018. Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. Llm+ p: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023. Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. Self- supervised learning: Generative or contrastive. IEEE transactions on knowledge and data engi- neering, 35(1):857â 876, 2021. Pattie Maes.
2308.03688#40
2308.03688#42
2308.03688
[ "2204.02311" ]
2308.03688#42
AgentBench: Evaluating LLMs as Agents
Agents that reduce work and information overload. Commun. ACM, 37:30â 40, 1994. Dirk Merkel et al. Docker: lightweight linux containers for consistent development and deployment. Linux j, 239(2):2, 2014. Yohei Nakajima. Babyagi. Python. https://github. com/yoheinakajima/babyagi, 2023. Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kry´sci´nski, Nick Schoelkopf, Riley Kong, Xiangru Tang, Murori Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, and Dragomir Radev.
2308.03688#41
2308.03688#43
2308.03688
[ "2204.02311" ]
2308.03688#43
AgentBench: Evaluating LLMs as Agents
Fetaqa: Free-form table question answering, 2021. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, 2023. OpenAI. Introducing chatgpt, 2022. URL https://openai.com/blog/chatgpt. R OpenAI. Gpt-4 technical report. arXiv, pp. 2303â 08774, 2023. Philip Osborne, Heido Nõmm, and André Freitas.
2308.03688#42
2308.03688#44
2308.03688
[ "2204.02311" ]
2308.03688#44
AgentBench: Evaluating LLMs as Agents
A survey of text games for reinforcement learning informed by natural language. Transactions of the Association for Computational Linguistics, 10: 873â 887, 2022. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730â 27744, 2022. Joon Sung Park, Joseph C.
2308.03688#43
2308.03688#45
2308.03688
[ "2204.02311" ]
2308.03688#45
AgentBench: Evaluating LLMs as Agents
Oâ Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. ArXiv, abs/2304.03442, 2023. Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1470â 1480, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.3115/v1/P15-1142. URL https://aclanthology.org/P15-1142. Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gómez Colmenarejo, Alexander Novikov, Gabriel Barth-maron, Mai Giménez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al.
2308.03688#44
2308.03688#46
2308.03688
[ "2204.02311" ]
2308.03688#46
AgentBench: Evaluating LLMs as Agents
A generalist agent. Transactions on Machine Learning Research, 2022. Toran Bruce Richards. Auto-gpt: An autonomous gpt-4 experiment, 2023. Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023. 13 Technical Report (v0.2)
2308.03688#45
2308.03688#47
2308.03688
[ "2204.02311" ]
2308.03688#47
AgentBench: Evaluating LLMs as Agents
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2022. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al.
2308.03688#46
2308.03688#48
2308.03688
[ "2204.02311" ]
2308.03688#48
AgentBench: Evaluating LLMs as Agents
Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
John R. Searle. Speech acts: An essay in the philosophy of language. Language, 46:217, 1970.
Bokui Shen, Fei Xia, Chengshu Li, Roberto Martín-Martín, Linxi Fan, Guanzhi Wang, Claudia Pérez-D'Arpino, Shyamal Buch, Sanjana Srivastava, Lyne Tchapmi, et al. iGibson 1.0: A simulation environment for interactive tasks in large realistic scenes. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7520–7527. IEEE, 2021.
Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In International Conference on Machine Learning, pp. 3135–3144. PMLR, 2017.
Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. Alfred: A benchmark for interpreting grounded instructions for everyday tasks.
2308.03688#47
2308.03688#49
2308.03688
[ "2204.02311" ]
2308.03688#49
AgentBench: Evaluating LLMs as Agents
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10740–10749, 2020a.
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. In International Conference on Learning Representations, 2020b.
Paul Sloane. Lateral thinking puzzlers. Sterling Publishing Company, Inc., 1992.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023.
Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, Kent Elliott Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, Karen Liu, et al.
2308.03688#48
2308.03688#50
2308.03688
[ "2204.02311" ]
2308.03688#50
AgentBench: Evaluating LLMs as Agents
Behavior: Benchmark for everyday household activities in virtual, interactive, and ecological environments. In Conference on Robot Learning, pp. 477–490. PMLR, 2022.
Yu Su, Huan Sun, Brian M. Sadler, Mudhakar Srivatsa, Izzeddin Gur, Zenghui Yan, and Xifeng Yan. On generating characteristic-rich question sets for QA evaluation. In Jian Su, Xavier Carreras, and Kevin Duh (eds.), Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pp. 562–572. The Association for Computational Linguistics, 2016. doi: 10.18653/v1/d16-1054. URL https://doi.org/10.18653/v1/d16-1054.
Alon Talmor and Jonathan Berant. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 641–651, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1059. URL https://aclanthology.org/N18-1059.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149–4158, 2019.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.
2308.03688#49
2308.03688#51
2308.03688
[ "2204.02311" ]
2308.03688#51
AgentBench: Evaluating LLMs as Agents
Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Daniel Toyama, Philippe Hamel, Anita Gergely, Gheorghe Comanici, Amelia Glaese, Zafarali Ahmed, Tyler Jackson, Shibl Mourad, and Doina Precup. Androidenv: A reinforcement learning platform for android. arXiv preprint arXiv:2105.13231, 2021.
2308.03688#50
2308.03688#52
2308.03688
[ "2204.02311" ]
2308.03688#52
AgentBench: Evaluating LLMs as Agents
Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 673–683, 2019.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman.
2308.03688#51
2308.03688#53
2308.03688
[ "2204.02311" ]
2308.03688#53
AgentBench: Evaluating LLMs as Agents
Glue: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32, 2019.
Guan Wang, Sijie Cheng, Xianyuan Zhan, Xiangang Li, Sen Song, and Yang Liu. Openchat: Advancing open-source language models with mixed-quality data, 2023a.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi (Jim) Fan, and Anima Anandkumar.
2308.03688#52
2308.03688#54
2308.03688
[ "2204.02311" ]
2308.03688#54
AgentBench: Evaluating LLMs as Agents
Voyager: An open-ended embodied agent with large language models. ArXiv, abs/2305.16291, 2023b.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023c.
Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. arXiv preprint arXiv:2302.01560, 2023d.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022a.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022b.
Michael Wooldridge and Nicholas R Jennings. Intelligent agents: Theory and practice. The knowledge engineering review, 10(2):115–152, 1995.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
John Yang, Akshara Prabhakar, Karthik Narasimhan, and Shunyu Yao. Intercode: Standardizing and benchmarking interactive coding with execution feedback. arXiv preprint arXiv:2306.14898, 2023.
Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan.
2308.03688#53
2308.03688#55
2308.03688
[ "2204.02311" ]
2308.03688#55
AgentBench: Evaluating LLMs as Agents
Webshop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744–20757, 2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023a.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023b.
2308.03688#54
2308.03688#56
2308.03688
[ "2204.02311" ]
2308.03688#56
AgentBench: Evaluating LLMs as Agents
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414, 2022.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, et al. Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x. arXiv preprint arXiv:2303.17568, 2023.
Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103, 2017.
Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyuan Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Y. Qiao, Zhaoxiang Zhang, and Jifeng Dai.
2308.03688#55
2308.03688#57
2308.03688
[ "2204.02311" ]
2308.03688#57
AgentBench: Evaluating LLMs as Agents
Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory. ArXiv, abs/2305.17144, 2023.

# Part I Appendix

# Table of Contents

A Framework
  A.1 Traditional Evaluation Frameworks
  A.2 Our Designed Evaluation Framework
  A.3 Implementation of Max-Flow Algorithm
B Operating System
  B.1 Dataset details
  B.2 Actions
  B.3 Prompt Example
C Database
  C.1 Dataset Details
  C.2 Data Augmentation
  C.3 Prompt Example
D Knowledge Graph
  D.1 Dataset Details
  D.2 Prompt Example
E Digital Card Game
  E.1 Dataset Details
  E.2 The Attributes of Fish
  E.3 Prompt Example
F Lateral Thinking Puzzles
  F.1 Dataset Details
  F.2 Evaluation on LTP System
  F.3 LTP Game Progress and Termination
  F.4 Prompt Example
G House-holding
  G.1 Dataset Details
  G.2 Prompt Example
H Web Shopping
  H.1 Dataset Details
  H.2 Prompt Example
I Web Browsing
  I.1 Dataset Details
  I.2 Prompt Example
J Detailed Analysis
  J.1 Validity Analysis of Execution Outcomes
2308.03688#56
2308.03688#58
2308.03688
[ "2204.02311" ]