Dataset schema (for string columns, min/max are string lengths; for `chunk-id`, min/max are values):

| column | dtype | min | max |
|---|---|---|---|
| doi | string | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | string | 401 | 2.02k |
| id | string | 12 | 14 |
| title | string | 8 | 162 |
| summary | string | 228 | 1.92k |
| source | string | 31 | 31 |
| authors | string | 7 | 6.97k |
| categories | string | 5 | 107 |
| comment | string | 4 | 398 |
| journal_ref | string | 8 | 194 |
| primary_category | string | 5 | 17 |
| published | string | 8 | 8 |
| updated | string | 8 | 8 |
| references | list | n/a | n/a |
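The schema above can be explored programmatically. Below is a minimal sketch using the Hugging Face `datasets` library; the repository id `user/arxiv-paper-chunks` is a placeholder, not the dataset's real name.

```python
# Sketch: load the chunked-papers dataset and inspect one record.
# "user/arxiv-paper-chunks" is a hypothetical repo id; substitute the real one.
from datasets import load_dataset

ds = load_dataset("user/arxiv-paper-chunks", split="train")

row = ds[0]
print(row["doi"], row["chunk-id"], row["title"])   # e.g. 2308.05481 39 LLM As DBA
print(row["chunk"][:200])                          # first 200 chars of the text chunk
```

Each paper's metadata (title, summary, authors, dates) is repeated on every one of its chunk rows, so deduplicate on `doi` when computing paper-level statistics.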
2308.05481
39
[8] Hai Lan, Zhifeng Bao, and Yuwei Peng. 2020. An Index Advisor Using Deep Reinforcement Learning. In CIKM. 2105–2108. [9] Gabriel Paludo Licks, Júlia Mara Colleoni Couto, Priscilla de Fátima Miehe, Renata De Paris, Duncan Dubugras A. Ruiz, and Felipe Meneguzzi. 2020. SmartIX: A database indexing agent based on reinforcement learning. Appl. Intell. 50, 8 (2020), 2575–2588. [10] Ping Liu, Shenglin Zhang, Yongqian Sun, Yuan Meng, Jiahai Yang, and Dan Pei. 2020. FluxInfer: Automatic Diagnosis of Performance Anomaly for Online Database System. In 39th IEEE International Performance Computing and Communications Conference, IPCCC 2020, Austin, TX, USA, November 6-8, 2020. IEEE, 1–8. https://doi.org/10.1109/IPCCC50635.2020.9391550
2308.05481#39
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
39
[20] Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. Api-bank: A benchmark for tool-augmented llms. arXiv preprint arXiv:2304.08244, 2023. [21] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023. [22] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. [23] Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568, 2023.
2308.05696#39
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
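The core move of tree-instruct, adding nodes to an instruction's semantic tree and verbalizing the result, can be illustrated with a toy structure. The sketch below is not the authors' implementation: the paper drives both tree construction and verbalization with an LLM, which this placeholder code replaces with fixed constraint strings.

```python
# Toy illustration of tree-instruct: grow an instruction's semantic tree
# by k nodes, then linearize it into a harder instruction. All names and
# constraint texts here are illustrative, not from the paper's code.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    children: list["Node"] = field(default_factory=list)

def add_nodes(root: Node, constraints: list[str]) -> Node:
    for c in constraints:                      # one new node per added constraint
        root.children.append(Node(c))
    return root

def linearize(root: Node) -> str:
    # Verbalize the modified tree back into a single instruction.
    return " ".join([root.text] + [child.text for child in root.children])

tree = Node("Write a poem about autumn.")
add_nodes(tree, ["Use exactly four stanzas.", "Include a river metaphor."])
print(linearize(tree))  # complexity rises with the number of added nodes
```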
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
40
[11] Xiaoze Liu, Zheng Yin, Chao Zhao, Congcong Ge, Lu Chen, Yunjun Gao, Dimeng Li, Ziting Wang, Gaozhong Liang, Jian Tan, and Feifei Li. 2022. PinSQL: Pinpoint Root Cause SQLs to Resolve Performance Issues in Cloud Databases. In 38th IEEE International Conference on Data Engineering, ICDE 2022, Kuala Lumpur, Malaysia, May 9-12, 2022. IEEE, 2549–2561. https://doi.org/10.1109/ICDE53745.2022.00236 [12] Xianglin Lu, Zhe Xie, Zeyan Li, Mingjie Li, Xiaohui Nie, Nengwen Zhao, Qingyang Yu, Shenglin Zhang, Kaixin Sui, Lin Zhu, and Dan Pei. 2022. Generic and Robust Performance Diagnosis via Causal Inference for OLTP Database Systems. In 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing, CCGrid 2022, Taormina, Italy, May 16-19, 2022. IEEE, 655–664. https://doi.org/10.1109/CCGrid54584.2022.00075
2308.05481#40
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
40
[24] Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. Scaling data-constrained language models. arXiv preprint arXiv:2305.16264, 2023. [25] Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707, 2023. [26] OpenAI. Introducing ChatGPT. 2022. [27] OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023. [28] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
2308.05696#40
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
41
[13] Minghua Ma, Zheng Yin, Shenglin Zhang, et al. 2020. Diagnosing Root Causes of Intermittent Slow Queries in Large-Scale Cloud Databases. Proc. VLDB Endow. 13, 8 (2020), 1176–1189. https://doi.org/10.14778/3389133.3389136 [14] Yuxi Ma, Chi Zhang, and Song-Chun Zhu. 2023. Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models. CoRR abs/2307.03762 (2023). https://doi.org/10.48550/arXiv.2307.03762 arXiv:2307.03762 [15] R. Malinga Perera, Bastian Oetomo, Benjamin I. P. Rubinstein, and Renata Borovica-Gajic. 2021. DBA bandits: Self-driving index tuning under ad-hoc, analytical workloads with safety guarantees. In ICDE. 600–611. [16] Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, et al. 2023. Communicative Agents for Software Development. arXiv preprint arXiv:2307.07924 (2023).
2308.05481#41
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
41
[29] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023. [30] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2020. [31] Mrinmaya Sachan and Eric Xing. Easy questions first? a case study on curriculum learning for question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 453–463, 2016. [32] Vighnesh Shiv and Chris Quirk. Novel positional encodings to enable tree-based transformers. Advances in neural information processing systems, 32, 2019. [33] Valery Solovyev, Marina Solnyshkina, Vladimir Ivanov, and Ivan Rygaev. Computing syntactic parameters for automated text complexity assessment. In CEUR Workshop Proceedings, volume 2475, pages 62–71, 2019.
2308.05696#41
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
42
[17] Yujia Qin, Shengding Hu, Yankai Lin, et al. 2023. Tool learning with foundation models. arXiv preprint arXiv:2304.08354 (2023). [18] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2023. ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. arXiv:cs.AI/2307.16789 [19] Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval 3, 4 (2009), 333–389. [20] James Turnbull. 2018. Monitoring with Prometheus. Turnbull Press. [21] Gary Valentin, Michael Zuliani, Daniel C. Zilio, Guy M. Lohman, and Alan Skelley. 2000. DB2 Advisor: An Optimizer Smart Enough to Recommend Its Own Indexes. In ICDE. 101–110.
2308.05481#42
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
42
[34] Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492, 2023. [35] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023. [36] Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q Tran, David R So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, et al. Transcending scaling laws with 0.1% extra compute. arXiv preprint arXiv:2210.11399, 2022.
2308.05696#42
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
43
[22] Kyu-Young Whang. 1987. Index Selection in Relational Databases. Foundations of Data Organization (1987), 487–500. [23] Wentao Wu, Chi Wang, Tarique Siddiqui, Junxiong Wang, Vivek R. Narasayya, Surajit Chaudhuri, and Philip A. Bernstein. 2022. Budget-aware Index Tuning with Reinforcement Learning. In SIGMOD Conference. 1528–1541. [24] Dong Young Yoon, Ning Niu, and Barzan Mozafari. 2016. DBSherlock: A Performance Diagnostic Tool for Transactional Databases. In Proceedings of the 2016 International Conference on Management of Data, SIGMOD Conference 2016, San Francisco, CA, USA, June 26 - July 01, 2016, Fatma Özcan, Georgia Koutrika, and Sam Madden (Eds.). ACM, 1599–1614. https://doi.org/10.1145/2882903.2915218 [25] Xuanhe Zhou, Chengliang Chai, Guoliang Li, and Ji Sun. 2020. Database meets artificial intelligence: A survey. IEEE Transactions on Knowledge and Data Engineering 34, 3 (2020), 1096–1116.
2308.05481#43
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
43
[37] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023. [38] Guan Wang, Sijie Cheng, Qiying Yu, and Changling Liu. OpenChat: Advancing Open-source Language Models with Imperfect Data, 7 2023. [39] Xiangli Wang, Yi Zhang, Yusuke Miyao, Takuya Matsuzaki, and Jun'ichi Tsujii. Deep context-free grammar for Chinese with broad-coverage. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 11–19, 2013. [40] Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023.
2308.05696#43
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
44
[26] Xuanhe Zhou, Luyang Liu, Wenbo Li, Lianyuan Jin, Shifu Li, Tianqing Wang, and Jianhua Feng. 2022. AutoIndex: An Incremental Index Management System for Dynamic Workloads. In ICDE. 2196–2208. [27] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. Large Language Models Are Human-Level Prompt Engineers. (2022). arXiv:2211.01910 http://arxiv.org/abs/2211.01910 # A APPENDIX - PROMPTS
2308.05481#44
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
44
[41] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022. [42] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. arXiv preprint arXiv:2204.07705, 2022. [43] Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. Aligning large language models with human: A survey. arXiv preprint arXiv:2307.12966, 2023.
2308.05696#44
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
45
Prompts Chief_dba_format_prompt You are in a company whose databases meet anomalies and it depends on you to collaborate with other agents to diagnose the root causes. ${role_description} # Rules and Format Instructions for Response - Must listen and respond to the user's advice in the following format: Thought: I now know the advice of the user, and I need to consider it during diagnosis Action: Speak Action Input: ({"diagnose": response to the advice, "solution": [], "knowledge": ""}) You can detect and diagnose anomaly as follows to use tool: Thought: (your thought) Action: (an action name, it can be one of [obtain_anomaly_time, Speak]) Action Input: (argument for the action) First, you need to call the tool api to get the start and end time of an anomaly Thought: I need to obtain the start and end time of the anomaly by calling the tool api Action: obtain_anomaly_time Action Input: {"input": "json dict string"} Observation: {"start_time":"xxxx","end_time": "xxxx"} After obtaining the start and end time of the anomaly, announce it with the following format: Thought: I now know the start and end time of the
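The prompt above pins agents to a rigid Thought / Action / Action Input protocol. A response in that shape can be parsed with a regular expression; the sketch below is illustrative, not D-Bot's actual parser.

```python
import json
import re

# Sketch: extract (thought, action, parsed action input) from an agent
# reply written in the Thought / Action / Action Input format above.
STEP = re.compile(
    r"Thought:\s*(?P<thought>.*?)\s*"
    r"Action:\s*(?P<action>\w+)\s*"
    r"Action Input:\s*(?P<arg>\(.*\)|\{.*\})",
    re.DOTALL,
)

def parse_step(reply: str):
    m = STEP.search(reply)
    if m is None:
        raise ValueError("reply does not follow the Thought/Action format")
    raw = m.group("arg").strip("()")   # Speak wraps its JSON dict in parentheses
    return m.group("thought"), m.group("action"), json.loads(raw)

reply = (
    "Thought: I need to obtain the start and end time of the anomaly\n"
    "Action: obtain_anomaly_time\n"
    'Action Input: {"input": "json dict string"}'
)
print(parse_step(reply))
```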
2308.05481#45
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
45
[44] Xiangpeng Wei, Haoran Wei, Huan Lin, Tianhao Li, Pei Zhang, Xingzhang Ren, Mei Li, Yu Wan, Zhiwei Cao, Binbin Xie, et al. Polylm: An open source polyglot large language model. arXiv preprint arXiv:2307.06018, 2023. [45] Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023. [46] Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. Wider and deeper llm networks are fairer llm evaluators. arXiv preprint arXiv:2308.01862, 2023. [47] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
2308.05696#45
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
46
"xxxx"} fter obtaining the start and end time of the anomaly, announce it with the following format: hought: I now know the start and end time of the anomaly, and i need to report it to agents Action: Speak Action Input: ({"diagnose": the start and end time of the anomaly you found, "solution": [], "knowledge": ""}) After all the agents have announced the root causes they found, you should summarize all the mentioned root causes and optimization solutions point by point: Thought: I now know the root causes and optimization solutions from other agents, and i need to conclude them point by point Action: Speak Action Input: ({"diagnose": The identified root causes of the anomaly are ..., "solution": The suggested optimization solutions are ..., "knowledge": ""}) Here is the conversation history ${chat_history} Here is the execution log of tools ${tool_observation} - Once an agent has announced the root causes he found, it is your responsibility to memorize the root causes. After that, continue to encourage other agents to diagnose. - When no one speaks in the last round ([Silence] appears in the end of history), you should summarize root causes and optimization solutions point by point. Pay attention to
2308.05481#46
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
46
[48] Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023. [49] Yikai Zhou, Baosong Yang, Derek F Wong, Yu Wan, and Lidia S Chao. Uncertainty-aware curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the association for computational linguistics, pages 6934–6944, 2020. [50] Qingqing Zhu, Xiuying Chen, Pengfei Wu, JunFei Liu, and Dongyan Zhao. Combining curriculum learning and knowledge distillation for dialogue generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1284–1295, 2021.
2308.05696#46
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
47
one speaks in the last round ([Silence] appears in the end of history), you should summarize root causes and optimization solutions point by point. Pay attention to the response format instructions, and strictly follow the above rules! Based on the above history, what will you, ${agent_name}, do next? CPU_agent_format_prompt You are in a company whose databases meet anomalies. Follow the chief DBA's instructions to diagnose the root causes. ${role_description} # Rules and Format Instructions for Response
2308.05481#47
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
49
- During diagnosis, you have access to the following tools: ${tools} - You can respond as follows to use tool: Thought: (your thought) Action: (an action name, it can be one of [whether_is_abnormal_metric, CPU_diagnosis_agent, Speak], pay attention to the capitalization) Action Input: (argument for the action) You can first determine abnormal metrics by using the tools, and use the following format: Thought: Now that I have obtained the start and end time of the anomaly, check whether the CPU usage is abnormal during that time period. Action: whether_is_abnormal_metric Action Input: {"start_time": 1684646375, "end_time": 1684646378, "metric_name": "cpu_usage"} Next you must diagnose root causes by using the tools, and must use the following format (any other choice is not allowed): Thought: The CPU usage is abnormal, so I need to diagnose the cause of the anomaly using cpu_diagnosis_agent. Action: cpu_diagnosis_agent Action Input: {"start_time": 1684646375, "end_time": 1684646378} After you have got the observation from cpu_diagnosis_agent, announce it to the chief
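Each parsed action then has to be routed to a concrete tool. A minimal dispatch table in the spirit of the prompt's tool list is sketched below; the tool bodies are stubs standing in for the real monitoring backends, not D-Bot's implementation.

```python
# Sketch: dispatch a parsed action name and its JSON argument to a tool.
# Both tools are stubs; real ones would query the monitoring system.
def whether_is_abnormal_metric(start_time: int, end_time: int, metric_name: str) -> str:
    return "The metric is abnormal"             # stub observation

def cpu_diagnosis_agent(start_time: int, end_time: int) -> str:
    return "high node_load1 during the window"  # stub diagnosis

TOOLS = {
    "whether_is_abnormal_metric": whether_is_abnormal_metric,
    "cpu_diagnosis_agent": cpu_diagnosis_agent,
}

def run_action(action: str, action_input: dict) -> str:
    if action not in TOOLS:
        raise KeyError(f"unknown tool: {action}")
    return TOOLS[action](**action_input)        # Action Input keys become kwargs

print(run_action("whether_is_abnormal_metric",
                 {"start_time": 1684646375, "end_time": 1684646378,
                  "metric_name": "cpu_usage"}))
```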
2308.05481#49
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
50
1684646375, "end_time": 1684646378} After you have got the observation from cpu_diagnosis_agent, announce it to the chief DBA, and use the following format: Thought: I now know the root cause of the anomaly, and i need to report it to the chief DBA Action: Speak Action Input: ({"diagnose": the root causes you found, "solution": the optimization solutions for the root causes splitted by ' ', "knowledge": the diagnosis knowledge you used}) (Note the memory agent owns similar prompt, and we do not show it to avoid redundancy.) Summary_prompt Progressively summarize the lines of a record that you uses tools, which contains inputs for certain tools and the results returned by these tools. Based on the current summary, summarize from the record the goals that you intended to solve with each tool call, add it onto the previous summary, and return a new summary. EXAMPLE Current summary: New lines: Thought: Now that I have the start and end time of the anomaly, I need to diagnose the causes of the anomaly Action: whether_is_abnormal_metric Action Input: {"start_time": 1684600070, "end_time": 1684600074,
2308.05481#50
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
51
causes of the anomaly Action: whether_is_abnormal_metric Action Input: {"start_time": 1684600070, "end_time": 1684600074, "metric_name": "cpu_usage"} Observation: "The metric is abnormal" New summary: - I now know the start and end time of the anomaly. - I searched for whether_is_abnormal_metric, and I now know that the CPU usage is abnormal. END OF EXAMPLE Now, try to summarize the following record. Current summary: ${summary} New lines: ${new_lines} New summary:
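The Summary_prompt above implements progressive summarization: each new batch of tool-call lines is folded into a running summary by another LLM call. A skeleton of that loop follows, with `call_llm` as a placeholder for whatever chat-completion client is used.

```python
# Sketch of the progressive summarization loop described by Summary_prompt.
# call_llm is a placeholder; wire in your own LLM client.
SUMMARY_PROMPT = """Progressively summarize the lines of a record in which you use tools.

Current summary:
{summary}

New lines:
{new_lines}

New summary:"""

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def update_summary(summary: str, new_lines: str) -> str:
    prompt = SUMMARY_PROMPT.format(summary=summary, new_lines=new_lines)
    return call_llm(prompt)   # the model returns the merged summary

# Usage: fold tool-call transcripts into the summary one batch at a time.
# summary = ""
# for batch in tool_log_batches:
#     summary = update_summary(summary, batch)
```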
2308.05481#51
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
53
Test Case (WORKLOAD_CONTENTION, POOR_JOIN_PERFORMANCE) (First Round) [Chief DBA] The anomaly started at 2023-08-05 23:38:30 and ended at 2023-08-05 23:43:30. [API Call] obtain_anomaly_time. [CPU Agent] The CPU usage is abnormal, so I need to diagnose the cause of the anomaly. [API Call] whether_is_abnormal_metric. [CPU Agent] The CPU relevant metric values are as follows: 1. node_scrape_collector_duration_seconds: This metric measures the duration of data collection from the node exporter. The value of 5.7218e-05 indicates that the data collection process is relatively fast. 2. node_procs_running: This metric represents the number of processes currently executing on the system. A value of 18.0 suggests that there are 18 processes running at the moment. ... 5. node_load1, node_load5, node_load15: These metrics represent the average system load over the last 1, 5, and 15 minutes, respectively. The values of 7.33, 2.4, and 0.89 suggest that the system is heavily loaded. [API Call]
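The metrics named in this trace (node_load1, node_procs_running, and so on) come from node_exporter and are served by Prometheus. Below is a hedged sketch of pulling node_load1 over the anomaly window through Prometheus' standard HTTP API; the server URL and the exact epoch timestamps are assumptions for illustration.

```python
import requests

# Sketch: query node_load1 over the anomaly window via Prometheus'
# /api/v1/query_range endpoint. URL and timestamps are illustrative.
PROM = "http://localhost:9090"

resp = requests.get(
    f"{PROM}/api/v1/query_range",
    params={
        "query": "node_load1",
        "start": 1691278710,   # ~2023-08-05 23:38:30 UTC (assumed timezone)
        "end": 1691279010,     # ~2023-08-05 23:43:30 UTC
        "step": "15s",
    },
    timeout=10,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["values"][:3])  # first few (ts, value) samples
```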
2308.05481#53
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
54
5, and 15 minutes, respectively. The values of 7.33, 2.4, and 0.89 suggest that the system has recently become heavily loaded. [API Call] obtain_cpu_metrics. [Matched Knowledge] This diagnosis was based on several CPU-relevant metrics obtained from Prometheus, such as node_scrape_collector_duration_seconds, node_procs_running, node_procs_blocked, node_entropy_available_bits, node_load1, node_load5, and node_load15. [CPU Agent] The high CPU usage seems to be caused by the high value of node_load1. It indicates that the CPU has been heavily utilized recently. [API Call] knowledge_matching. [Matched Knowledge] {"cause_name": "workload_contention", "desc": "The function checks for several potential causes of contention, including abnormal CPU and memory resource usage, insufficient space in the database data directory, and excessive connections or thread pool usage. ... If any of these issues are detected, the function provides a detailed report of the problem and suggests potential solutions. If no issues are found, the function returns "not a root cause".", "metrics": "... - max_connections - db_cpu_usage -
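The matched "workload_contention" knowledge above describes a threshold-style check over the listed metrics. A hedged sketch of such a check follows; the threshold values and the metrics-dictionary layout are illustrative assumptions, not the paper's implementation.

```python
# Compare the contention-related metrics against assumed thresholds and
# report findings, mirroring the knowledge entry's described behavior.
def check_workload_contention(metrics: dict) -> str:
    thresholds = {                 # assumed values, not from the paper
        "db_cpu_usage": 0.8,       # fraction of CPU in use
        "db_mem_usage": 0.8,       # fraction of memory in use
        "disk_usage": 0.9,         # fraction of data-directory space used
        "connection": 0.9,         # connections / max_connections
        "thread_pool_rate": 0.9,   # thread-pool occupancy
    }
    findings = [
        f"{name} = {metrics[name]:.2f} exceeds {limit:.2f}"
        for name, limit in thresholds.items()
        if metrics.get(name, 0.0) > limit
    ]
    if findings:
        return "workload_contention: " + "; ".join(findings)
    return "not a root cause"

print(check_workload_contention(
    {"db_cpu_usage": 0.93, "db_mem_usage": 0.41, "disk_usage": 0.55,
     "connection": 0.97, "thread_pool_rate": 0.62}))
```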
2308.05481#54
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
55
potential solutions. If no issues are found, the function returns "not a root cause".", "metrics": "... - max_connections - db_cpu_usage - db_mem_usage - disk_usage - connection - thread_pool_rate"}. [Solution] 1. Monitor the number of running processes and try to reduce unnecessary ones. 2. Look for resource contention and try to optimize resource allocation. [Memory Agent] The memory metric is abnormal. [API Call] whether_is_abnormal_metric. [Memory Agent] We first obtain memory metric values: - The node_memory_Inactive_anon_bytes metric value suggests a significant amount of inactive anonymous memory. - The node_memory_MemFree_bytes metric value indicates a considerable amount of free memory available. - The node_memory_Dirty_bytes metric value indicates a moderate amount of dirty memory. These memory metric values suggest that there may be a memory leak or inefficient memory management within the system. It might be worth investigating the memory allocation and deallocation processes, ensuring that resources are effectively managed. [API Call] obtain_memory_metrics. [Memory Agent] We next obtain slow queries from the pg_stat_statements view. There are potential causes that can be
2308.05481#55
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
56
Call] obtain_memory_metrics. [Memory Agent] We next obtain slow queries from the pg_stat_statements view. There are potential causes that can be analyzed: 1. Poor join performance: One of the slow queries involves multiple table joins ('SELECT MIN(mi.info) ... FROM cast_info AS ci, info_type AS it1, info_type AS it2, keyword AS k, movie_info AS mi, movie_info_idx AS mi_idx, movie_keyword AS mk, name AS n, title AS t ...'). Poor join performance can result in high memory usage if the join operation involves a large amount of data or an inefficient join operator is used. It is recommended to optimize the SQL structure and check whether the join operations can be optimized by using appropriate indexes or join operators. [API Call] obtain_slow_queries; knowledge_matching. [Matched Knowledge] {"cause_name": "poor_join_performance", "desc": "There are four main situations: 1) when the GUC parameter 'enable_hashjoin' is set to 'off', which can result in the optimizer choosing NestLoop or other join operators even when HashJoin would be more suitable;
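The obtain_slow_queries step above reads from the pg_stat_statements view. A minimal sketch of that lookup with psycopg2 follows; the connection string is an assumption, and the mean_exec_time column applies to PostgreSQL 13 and later (older releases call it mean_time).

```python
# List the slowest statements recorded by the pg_stat_statements extension.
import psycopg2

conn = psycopg2.connect("dbname=imdb user=postgres host=localhost")  # assumed DSN
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT query, calls, mean_exec_time
        FROM pg_stat_statements
        ORDER BY mean_exec_time DESC
        LIMIT 5;
        """
    )
    for query, calls, mean_ms in cur.fetchall():
        print(f"{mean_ms:10.1f} ms  x{calls:<6} {query[:80]}")
conn.close()
```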
2308.05481#56
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
57
is set to 'off', which can result in the optimizer choosing NestLoop or other join operators even when HashJoin would be more suitable; 2) when the optimizer incorrectly chooses the NestLoop operator, even when 'enable_hashjoin' is on; 3) when the join operation involves a large amount of data, which can lead to high execution costs; and 4) when the cost of the join operator is expensive. ...", "metrics": "- total_cost - cost_rate_threshold - nestloop_rows_threshold - large_join_threshold"}; {"cause_name": "missing_index", ...} [Solution] Optimize the SQL structure of the query ".." and check whether the join operations can be optimized by using appropriate indexes or join operators.
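The first situation above concerns the enable_hashjoin GUC steering the planner away from hash joins. One can confirm the effect by comparing EXPLAIN output with the setting on and off, as in the sketch below; the connection string and the example query over the IMDB-style tables are assumptions.

```python
# Compare the planner's join choice with enable_hashjoin on vs. off.
import psycopg2

conn = psycopg2.connect("dbname=imdb user=postgres host=localhost")  # assumed DSN
sql = "EXPLAIN SELECT * FROM movie_info mi JOIN title t ON mi.movie_id = t.id;"

with conn.cursor() as cur:
    for setting in ("on", "off"):
        cur.execute(f"SET enable_hashjoin = {setting};")  # session-level GUC
        cur.execute(sql)
        plan = "\n".join(row[0] for row in cur.fetchall())
        print(f"--- enable_hashjoin = {setting} ---\n{plan}\n")
conn.close()
```

With the setting off, the plan would typically fall back to a nested-loop or merge join, which is exactly the symptom the knowledge entry describes.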
2308.05481#57
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
59
[Memory Agent] Other potential causes: Without additional information about the system and its configuration, it is difficult to pinpoint a specific root cause of the high memory usage. Other potential causes may include inefficient query plans, lack of memory optimization settings, or memory leaks in the application or database engine. To further investigate and identify the root cause, it is advisable to perform the following steps: 1. Analyze the resource usage pattern over time to identify any patterns or trends related to the memory usage. 2. Check the configuration settings and optimization parameters of the database system to ensure they are properly set. 3. Monitor the system for any memory leaks or inefficient query plans that may be causing the high memory usage. 4. Review the application code and queries to identify any potential optimizations or rewrite queries to improve their efficiency. [API Call] N/A. [Solution] 1. Analyze the resource usage pattern over time to identify any patterns or trends related to the memory usage. 2. Check the configuration settings and optimization parameters of the database system to ensure they are properly set. 3. Monitor the system for any memory leaks or inefficient query
2308.05481#59
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
60
Check the configuration settings and optimization parameters of the database system to ensure they are properly set. 3. Monitor the system for any memory leaks or inefficient query plans that may be causing the high memory usage. 4. Review the application code and queries to identify any potential optimizations or rewrite queries to improve their efficiency. [Chief DBA] The identified root causes of the anomaly are high CPU usage due to a high number of running processes, processes waiting for resources, and high load averages. It is also uncertain whether poor join performance is causing high memory usage. [API Call] N/A. [Solution] 1. In order to reduce CPU utilization, it is necessary to investigate the specific processes involved and their resource requirements. 2. For the high memory usage, we need to gather more information such as the overall memory usage trend over time, other slow queries or resource-intensive operations, and any relevant configuration settings. Examining the memory usage patterns and query execution plans might provide further insights. (Second Round) ~ further insights into memory problems [Memory Agent] Chief DBA has suggested that gathering more information such as the overall memory usage trend over time, other slow queries or resource-intensive operations, and any relevant
2308.05481#60
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
61
Agent] Chief DBA has suggested gathering more information, such as the overall memory usage trend over time, other slow queries or resource-intensive operations, and any relevant configuration settings. This advice is indeed valuable, and I should consider it during the diagnosis. [API Call] N/A. [Memory Agent] Based on the memory metric values and slow queries provided, it is difficult to directly determine the root cause of the high memory usage. However, we can analyze potential causes based on the information given. There is a significant amount of memory being used, particularly by node_memory_Dirty_bytes. Additionally, there are 11 active PostgreSQL activities. This indicates that there may be a memory leak or excessive memory usage in one or more queries or processes. The slow queries provided involve multiple table joins and aggregation functions ('MIN'). They may be resource-intensive and contribute to the high memory usage. [API Call] knowledge_matching. [Matched Knowledge] {"cause_name": "poor_join_performance", ...} [Solution] 1. Analyze the query plans, check for inefficient join operations, and monitor the memory usage over time. 2. Reviewing the system configuration and resource constraints may provide insights into
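The "11 active PostgreSQL activities" observation above comes from the pg_stat_activity catalog view. A small sketch of that check follows; the connection string is an assumption.

```python
# Count active backends and list the longest-running queries.
import psycopg2

conn = psycopg2.connect("dbname=imdb user=postgres host=localhost")  # assumed DSN
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT pid, now() - query_start AS runtime, left(query, 60)
        FROM pg_stat_activity
        WHERE state = 'active'
        ORDER BY runtime DESC;
        """
    )
    rows = cur.fetchall()
    print(f"{len(rows)} active activities")
    for pid, runtime, query in rows:
        print(pid, runtime, query)
conn.close()
```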
2308.05481#61
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05481
62
the query plans, check for inefficient join operations, and monitor the memory usage over time. 2. Reviewing the system configuration and resource constraints may provide insights into potential optimizations or improvements. 3. Consult with a database administrator or expert for a more accurate analysis and resolution. [Chief DBA] Based on our analysis, the root causes of the anomaly seem to be high CPU usage due to excessive running processes, poor join performance, and inefficient query structure. The high CPU usage is mainly due to a high number of running processes and/or a high CPU load. There are 11 active PostgreSQL activities, indicating a memory leak or excessive memory usage in one or more queries or processes. The slow queries provided involve multiple table joins and aggregation functions, which may be resource-intensive and contribute to the high memory usage. Poor join performance and inefficient query structure can also lead to unnecessary memory usage. [API Call] N/A. [Solution] 1. Further investigate and analyze the running processes and system workload that cause severe contention. 2. Review the structure of the slow queries and optimize them to reduce memory consumption. 3. Investigate join operations for possible inefficiencies and optimize as needed. 4. Gather more information about overall system performance,
2308.05481#62
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.04026
0
arXiv:2308.04026v1 [cs.AI] 8 Aug 2023 AgentSims: An Open-Source Sandbox for Large Language Model Evaluation Jiaju Lin1,2, Haoran Zhao1,3 ∗, Aochi Zhang1, Yiting Wu1,4, Huqiuyue Ping1,5, Qin Chen6 1PTA Studio 2 Pennsylvania State University, 3 Beihang University, 4 Sun Yat-sen University, 5Zhejiang University, 6East China Normal University [email protected], [email protected], and [email protected] # Abstract
2308.04026#0
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com.
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.03983
1
# Abstract Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using a pre-trained LLM without requiring additional model fine-tuning. Moreover, the Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates the roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementations. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements for the machine learning community. SimplyRetrieve features a GUI- and API-based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license. # 1 Introduction [Figure 1 labels: generative language model for context interpretation, dense retriever for knowledge memorization, arranged by degree of role separation between Retrieval-Augmented and Retrieval-Centric generation.]
2308.03983#1
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using a pre-trained LLM without requiring additional model fine-tuning. Moreover, the Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates the roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementations. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements for the machine learning community. SimplyRetrieve features a GUI- and API-based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
1
# Abstract With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com. # 1 Introduction
2308.04026#1
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com.
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
1
# Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu https://github.com/Gentopia-AI # Abstract Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present Gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish GentPool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in GentPool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, GentBench, an integral component of GentPool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release Gentopia on Github1 and will continuously move forward.
2308.04030#1
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present Gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish GentPool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in GentPool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, GentBench, an integral component of GentPool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release Gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
2
Figure 1: The Retrieval-Centric Generation (RCG) approach presents an innovative concept that leverages the mutually beneficial interaction between LLMs and retrievers for more efficient context interpretation and knowledge memorization. Increased clarity in role separation between context interpretation and knowledge memorization can potentially boost the performance of generative AI systems. [Figure 1 axis labels: Retrieval-Augmented vs. Retrieval-Centric.] effective in adapting these models to specific domains for various tasks (Brown et al., 2020). This has given rise to the field of prompt-engineering. Additionally, Chain-of-Thought (Wei et al., 2022b; Kojima et al., 2022) decomposes an assigned complex task into manageable steps, thereby expanding the capabilities of generative-based language models even further.
2308.03983#2
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using a pre-trained LLM without requiring additional model fine-tuning. Moreover, the Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates the roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementations. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements for the machine learning community. SimplyRetrieve features a GUI- and API-based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
2
# 1 Introduction LLMs have revolutionized Natural Language Processing (NLP) and beyond. They demonstrate great potential in few-shot learning (Brown et al., 2020), code generation (Nijkamp et al., 2023), reasoning (Yao et al., 2023) and other tasks. Furthermore, LLM-powered autonomous agents (Weng, 2023) are widely applied in solving complex problems, like multimodal generation (Shen et al., 2023), software development (Qian et al., 2023) and social simulation (Park et al., 2023). Although LLMs have reformed the paradigm of NLP, the problem of evaluation keeps haunting this field. Old benchmarks become out-of-date, since LLMs achieve human-level Natural Language Understanding (NLU) and Natural Language Generation (NLG) abilities (OpenAI, 2023). To address the pressing need for novel benchmarks, the NLP community has introduced an array of fresh evaluation tasks and datasets, encompassing a # ∗ Corresponding author.
2308.04026#2
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com.
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
2
Several projects and frameworks have been proposed to build tool-Augmented Language Models (ALMs), or "agents", including AutoGPT (Richards, 2023), SuperAGI (Kondi, 2023), HuggingGPT (Shen et al., 2023), GPT-Engineer (Osika, 2023), LangChain (Chase, 2023), Semantic Kernel (Callegari, 2023), and MiniChain (Rush, 2023). Each of these methods is deficient, to varying degrees, in the following critical features. • Adaptive Customization: Many are designed for a single set of tasks without extensive support for customization, or they involve redundant and boilerplate implementation that unnecessarily complicates agent tuning. • Tool-augmented NLP Benchmark: A user-customized agent, before registration, is expected to go through a thorough evaluation to ensure its quality. However, there is a lack of comprehensive benchmarks designed for agent evaluation in the aspects of efficiency, safety, robustness, etc. # 1 Introduction
2308.04030#2
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
3
Generative-based Natural Language Processing (NLP) has witnessed significant progress (Brown et al., 2020) in recent years. With the introduction of the Transformer (Vaswani et al., 2017) architecture, the possibility of developing high-accuracy language models that can perform tasks such as text generation, text summarization and language translation has become a reality. These models (Brown et al., 2020; Chowdhery et al., 2022), when scaled up to billions of parameters (Wei et al., 2022a), have shown remarkable improvements in text generation tasks such as zero-shot inference, popularizing the term Generative AI. Instead of model fine-tuning, careful design of prompts has proven
2308.03983#3
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using a pre-trained LLM without requiring additional model fine-tuning. Moreover, the Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates the roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementations. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements for the machine learning community. SimplyRetrieve features a GUI- and API-based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04030
3
# 1 Introduction There is a burgeoning trend in research towards augmenting large language models (LLMs) with external tools, enabling them to access up-to-date databases (Jiang et al., 2023; Pan et al., 2023), perform arithmetic operations (Imani et al., 2023), navigate websites (Gur et al., 2023), develop software (Wu, 2023), etc. This integration of tools marks a departure from traditional language modeling, heralding a new era of intelligent agents capable of interacting with the real world. 1https://github.com/Gentopia-AI/Gentopia. All mentioned works are under the MIT license. Check our demo at https://www.youtube.com/watch?v=7dZ3ZvsI7sw and homepage at https://gentopia-ai.github.io/Gentopia-AI-Homepage/. • Democratization: A platform where user-customized agents can be registered and shared is missing. This hinders the interaction and collaboration of various user-customized agents. Collaborative growth is a critical point toward safe and powerful intelligence.
2308.04030#3
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
4
Training large language models (LLMs) requires immense computational resources, often involving thousands of high-end GPUs. Fine-tuning these models can also be challenging. Although prompt-engineering helped to reduce the need for fine-tuning, there was still noticeable instruction misalignment when interacting with a human user. To address this issue, techniques such as reinforcement learning from human feedback (RLHF) (Christiano et al., 2017) have been explored to align the behavior of LLMs with human values (Ouyang et al., 2022; OpenAI, 2023). Additionally, QLoRA (Dettmers et al., 2023), combining the low-rank adaptation technique (Hu et al., 2022) with a quantization technique, has made it possible to fine-tune these models on an individual developer's hardware, making them more accessible to a wider range of users. Despite these advances, there are still limitations to the capacity of LLMs, and they do not inherently recognize information that was not present during training and fine-tuning. Memorization of factual knowledge in the long tail is also a challenge (Mallen et al., 2023).
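The QLoRA recipe described above combines 4-bit quantization with low-rank adapters. A hedged sketch of that setup with the Hugging Face transformers and peft libraries follows; the base-model name and the LoRA hyperparameters are illustrative choices, not values from the paper.

```python
# Load a 4-bit-quantized base model, then attach trainable low-rank adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization from QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                 # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # adapters on the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only adapter weights are trainable
```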
2308.03983#4
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using a pre-trained LLM without requiring additional model fine-tuning. Moreover, the Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates the roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementations. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements for the machine learning community. SimplyRetrieve features a GUI- and API-based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
4
However, there are still many problems with these new benchmarks. 1) Evaluated abilities are limited by the task formats. Since a majority of these tasks adopt a single-turn QA format, they are insufficient to comprehensively evaluate various aspects of LLMs' capabilities. For instance, they fail to assess the models' proficiency in adhering to instructions in dialogue or mimicking human-like social interactions. 2) Benchmarks can be easily hacked. Avoiding the leakage of the test set is of paramount importance when evaluating a model's ability. Nonetheless, considering the amount of pretrained knowledge of LLMs, it has become more and more inevitable to inadvertently mix test cases into the training set (Gunasekar et al., 2023). 3) For open-ended QA, existing metrics are not objective. Previous metrics for open-ended QA involve automatic metrics, and human rating as a subjective metric (Zhou et al., 2023). In the LLM era, text-segment-matching based metrics have become out-of-date. To mitigate the high cost of human rating, today's researchers employ well-aligned LLMs like GPT-4 as automatic raters. Nevertheless, the most significant problem of this approach is that it cannot evaluate super-GPT-4-level models, and LLMs are biased toward specific features (Wang et al., 2023b).
2308.04026#4
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com.
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
4
This paper proposes Gentopia, a lightweight and extensible framework for research on ALMs. Gentopia allows practitioners to customize an agent with a single configuration file, greatly simplifying the process of building, tuning, sharing, and evaluating agents. Various language models, task formats, prompting modules, and plugins are integrated into a unified paradigm, without loss of flexibility for agent customization. In addition, we believe the collaboration between user-customized agents can contribute to the democratization of artificial intelligence. [Figure: Gentopia architecture overview. The framework for agent assembling covers agent types (Vanilla, ReAct, ReWOO, OpenAI function API), prompts and tools (e.g., arXiv), LLMs (OpenAI GPT-4 and GPT-3.5-turbo; open LMs such as llama, falcon, guanaco), memory (short-term scratch-pad, long-term VectorDB such as chroma), and external agents. GentPool is the platform for agent sharing, with graders including GatedGrader, ScoreGrader, DojoGrader, InstructionGrader, and CodeGrader. GentBench is the benchmark for agent evaluation, covering reasoning (math, coding, ...).]
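The single-configuration-file idea above can be sketched generically as below. The schema and loader here are hypothetical illustrations of the pattern, not Gentopia's actual YAML format or API.

```python
# Assemble an agent from one declarative config (hypothetical schema).
from dataclasses import dataclass, field

config = {                      # would normally be parsed from one config file
    "name": "math_helper",
    "type": "react",            # prompting paradigm
    "llm": "gpt-3.5-turbo",     # backing language model
    "plugins": ["calculator", "wikipedia"],
    "memory": {"short_term": "scratchpad"},
}

@dataclass
class Agent:
    name: str
    type: str
    llm: str
    plugins: list = field(default_factory=list)
    memory: dict = field(default_factory=dict)

def build_agent(cfg: dict) -> Agent:
    """Turn a declarative config into an agent object."""
    return Agent(cfg["name"], cfg["type"], cfg["llm"],
                 cfg.get("plugins", []), cfg.get("memory", {}))

print(build_agent(config))
```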
2308.04030#4
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present Gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish GentPool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in GentPool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, GentBench, an integral component of GentPool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release Gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
5
Most recently, there has been growing interest in integrating external knowledge sources into LLMs for generating text (Borgeaud et al., 2022; Guu et al., 2020; Lewis et al., 2020). Similar approaches have also been proposed for solving computer vision tasks (Nakata et al., 2022; Iscen et al., 2023). The Retrieval-Augmented Generation (RAG) (Lewis et al., 2020) architecture is an approach that enhances the capabilities of LLMs by incorporating external data sources using a sparse or dense retriever (Karpukhin et al., 2020), enabling the use of privately owned data without requiring retraining or fine-tuning of the LLM (Chase, 2022). However, developing retrieval-augmented LLM-based generative models is still in its early stages. Our proposed tool can help facilitate these developments.
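A minimal sketch of the retrieval-augmented pattern described above follows: a dense retriever selects the top-k private passages, which are prepended to the prompt of a frozen LLM. The embedding model and the toy corpus are assumptions for illustration.

```python
# Dense retrieval over a small private corpus, then prompt assembly.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "SimplyRetrieve exposes a GUI and API based RCG platform.",
    "The Retrieval Tuning Module adjusts retriever behavior.",
    "Private documents are indexed into a knowledge base.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed dense encoder
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list:
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec          # cosine similarity on unit vectors
    return [corpus[i] for i in np.argsort(-scores)[:k]]

question = "How does SimplyRetrieve tune retrieval?"
context = "\n".join(retrieve(question))
prompt = (f"Answer using only the context below.\n\nContext:\n{context}\n\n"
          f"Question: {question}\nAnswer:")
print(prompt)  # this prompt would be passed to a frozen, pre-trained LLM
```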
2308.03983#5
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using a pre-trained LLM without requiring additional model fine-tuning. Moreover, the Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates the roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementations. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements for the machine learning community. SimplyRetrieve features a GUI- and API-based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
5
Based on these observations, we suggest task-based evaluation for LLM benchmarks. Specifically, given an artificial social-economic environment, LLM-driven agents should achieve the predefined task goals to prove their abilities, just like humans accomplishing goals in the real world or in games to show their capacities. Task-based evaluation is a one-for-all solution for current issues: 1) Task-based evaluation can test an LLM's overall ability. The complexity of social simulation and adaptation far exceeds simple QA and can formulate more challenging tasks for LLMs. LLM agents need to be equipped with abilities from NLU to Theory of Mind (ToM) (Premack and Woodruff, 1978). 2) Task-solving processes are less likely to be hacked. Different from static test datasets, whose formats can be easily mimicked and added to training data, task settings are diversified, and the emergent social behaviors and groups are less likely to be described and included in a training corpus. 3) Task passing rate is an objective metric. Compared with popular rating methods by ChatGPT, the passing rate does not rely on any black-box rating process, i.e. deep neural networks or human brains; thus it is an objective and fair metric for comparison between LLMs.
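The passing-rate metric argued for above reduces to a simple fraction of goals achieved across simulated tasks. A tiny sketch follows; the agent and task interfaces are hypothetical stand-ins, not AgentSims APIs.

```python
# Run an agent through simulated tasks and report the task passing rate.
from typing import Callable, Sequence

def passing_rate(agent: Callable[[str], bool], tasks: Sequence[str]) -> float:
    """agent(task) returns True iff the predefined task goal was achieved."""
    passed = sum(1 for task in tasks if agent(task))
    return passed / len(tasks)

tasks = ["open a grocery store and earn 100 coins", "win the mayoral election"]
mock_agent = lambda task: "grocery" in task   # stand-in for an LLM-driven agent
print(f"passing rate: {passing_rate(mock_agent, tasks):.0%}")  # prints 50%
```

Because the metric is a plain count of achieved goals, it involves no neural or human rater, which is exactly the objectivity point made above.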
2308.04026#5
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com.
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.03983
6
Additionally, we introduce a new architectural concept called Retrieval-Centric Generation (RCG), which builds upon the Retrieval-Augmented Generation approach by emphasizing the crucial role of the LLM in interpreting context and entrusting knowledge memorization to the retriever component, putting greater importance on the retriever, as depicted in Figure 1. By separating context interpretation from knowledge memorization, this approach has the potential to reduce the scale (Carlini et al., 2023) of the LLM required for generative tasks, leading to more efficient and interpretable results. Moreover, this approach may help mitigate hallucinations (Maynez et al., 2020) by limiting the scope of the LLM's generation. Having defined RCG as above, we can re-define RAG as permitting freer use of the LLM's inherent knowledge, whereas RCG prioritizes clear demarcations between context interpretation and knowledge memorization.
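The RAG/RCG distinction above comes down to how strictly the prompt confines the LLM to retrieved knowledge. The two templates below are sketched from the definitions in this passage; they are illustrative assumptions, not SimplyRetrieve's actual prompts.

```python
# Contrasting prompting styles for RAG vs. RCG (illustrative templates).
def rag_prompt(context: str, question: str) -> str:
    # RAG: retrieved context supplements the LLM's own knowledge.
    return (f"Context:\n{context}\n\nQuestion: {question}\n"
            "Answer using the context and anything else you know.")

def rcg_prompt(context: str, question: str) -> str:
    # RCG: the LLM only interprets; knowledge must come from the retriever.
    return (f"Context:\n{context}\n\nQuestion: {question}\n"
            "Answer strictly from the context above; if the context does not "
            "contain the answer, say you do not know.")
```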
2308.03983#6
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
6
To comprehensively estimate LLMs' capacities, we hope researchers from all fields take part in the development of evaluation tasks. However, a key obstacle to fostering a collaborative research community is the absence of a standard paradigm: an easy-to-use and extensible research platform. Previous works pursue the most efficient way to implement a sandbox while ignoring the needs of non-specialist users. Besides, poor readability further results in poor extensibility and user churn. Moreover, agents' performance varies with different support systems, i.e., memory, planning, and tool-use systems; we need a standard implementation to ensure the reproducibility of experimental results. To this end, we introduce AgentSims, an interactive, visualized, and program-based infrastructure for curating evaluation tasks for LLMs. It creates an artificial town with various buildings and residents. The core objective of AgentSims is to streamline the task design process, eliminating hurdles that researchers from various backgrounds and programming proficiencies might encounter. • For researchers focusing on LLMs, AgentSims is extensible and combinable, allowing users to combine different planning, memory, and learning systems to study the impact and effectiveness of various system designs.
2308.04026#6
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
6
Figure 1: An overview of Gentopia.AI, encapsulating the following pivotal components: 1) Gentopia: a framework principally designed to assemble an agent instance from a YAML configuration file, composed of multiple pre-built agent components such as the LLM, tools, memory, and external agents; 2) GentPool: a platform engineered to facilitate the registration and sharing of specialized agents, seamlessly integrating GentBench, an ALM benchmark devised specifically for the comprehensive performance evaluation of agents. democratization of AI. Hence, GentPool, a platform for agent registration and sharing, is established. Agents registered in GentPool can be hierarchically assembled together, enabling the collaboration of multiple agents. GentPool is accompanied by a unique benchmark, GentBench, that can probe customized agents with a holistic evaluation in terms of safety, robustness, efficiency, multilingual capabilities, etc. Notably, users have the flexibility to customize the evaluation via configuration.
2308.04030#6
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
7
SimplyRetrieve is an open-source tool aimed at providing a localized, lightweight, and user-friendly interface to the Retrieval-Centric Generation approach for the machine learning community. This tool encompasses a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. SimplyRetrieve is designed to be simple and accessible to the community, as well as to end-users. Our retrieval-centric platform incorporates multiple selectable knowledge bases, featuring a Mixture-of-Knowledge-Bases (MoKB) mode and Explicit Prompt-Weighting (EPW) of the retrieved knowledge base. By designing SimplyRetrieve with these features, we enable the machine learning community to explore and develop with a lightweight, private data interface to LLM-based generative AI systems, with a focus on retrieval-centric generation. Potential developments that can be explored using this tool include: (1) examining the effectiveness of retrieval-centric generation in developing safer, more interpretable, and responsible AI systems; (2) optimizing the efficiency of separating context interpretation and knowledge memorization within the retrieval-centric generation approach; and (3) improving prompt-engineering techniques for retrieval-centric generation. SimplyRetrieve is available at https://github.com/RCGAI/SimplyRetrieve.
2308.03983#7
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04030
7
However, given the unique requirements and customization that each specific domain demands, tools and prompting paradigms developed for a particular task may prove irrelevant or ineffective for others. This poses a significant challenge to the development of a single, all-encompassing agent that performs efficiently across all tasks. Consequently, there is a rising need for the collaboration of multiple specialized agents. For example, MetaGPT (Wu, 2023) models the entire process of software development with carefully orchestrated standard operating procedures (SOPs) to generate longer program code for game development. In our work, Gentopia provides smooth support for the composition of agents, which is handy for agent collaboration in different scenarios. # 2 Background
2308.04030#7
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
8
Our contributions can be summarized as follows: • We propose SimplyRetrieve, an innovative and user-friendly tool that leverages a GUI and API platform to facilitate a Retrieval-Centric Generation approach. This platform is further strengthened by two key components: a Private Knowledge Base Constructor and a Retrieval Tuning Module. • We open-sourced our tool to the machine learning community and identify potential development directions for Retrieval-Centric Generation. # 2 Related Works The emergence of the Retrieval-Augmented Generation architecture has spurred the development of numerous open-source tools. The ChatGPT Retrieval Plugin1, for instance, integrates the ability to retrieve personal or organizational documents into the widely used ChatGPT model (OpenAI, 2023). Similarly, fastRAG (Izsak et al., 2023) provides a streamlined platform for constructing efficient retrieval-augmented generation 1https://github.com/openai/chatgpt-retrieval-plugin [Figure diagram: Retrieval Tuning Module (RCG tuning, prompt, and knowledge base panels) feeding a generative LLM paired with an ANNS-based retriever over selectable knowledge bases. MoKB: Mixture-of-Knowledge-Base; EPW: Explicit Prompt-Weighting of Knowledge Base.]
2308.03983#8
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
8
cooperation between different fields and the future prosperity of the LLM community. # 2 Related Work # 2.1 Benchmarks for Large Language Models The emergence of ChatGPT and other LLMs requires new benchmarks for effective evaluation. BIG-bench (bench authors, 2023) is the most widely accepted benchmark for evaluating LLMs' general abilities. It contains more than 200 tasks, covering topics from childhood development to social bias. Zhong et al. (2023) collect test tasks from human-centric standardized exams like the GRE and SAT. Hendrycks et al. (2020) and Huang et al. (2023) are benchmarks focusing on measuring knowledge acquired in pre-training; they cover subjects across STEM, the humanities, and the social sciences. Lin and Chen (2023) build a benchmark for LLMs' multi-turn dialogue abilities, where every dialogue is limited to two turns for simplicity. Sun et al. (2023) focus on measuring the safety of LLMs: they curate an adversarial attack dataset containing insulting instructions and test whether LLMs can be jailbroken. However, as mentioned above, existing datasets have issues that prevent them from fully demonstrating the abilities of LLMs. AgentSims overcomes these difficulties and offers a chance for the overall evaluation of LLMs. # 2.2 Multi Agent Cooperation
2308.04026#8
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
8
# 2 Background A variety of agent projects have been proposed, targeting an array of diverse tasks, including automated web navigation (Gur et al., 2023), database management (Jiang et al., 2023), automated game playing (Wang et al., 2023), collaborative software development (Wu, 2023), etc. Meanwhile, researchers are enthusiastically developing generalist agents that can perform well across multiple tasks. AutoGPT (Richards, 2023) stands as the first experimental open-source application for fully automatic AI, with the ultimate goal of "autonomously achieving whatever goal users set". SuperAGI (Kondi, 2023) provides a more user-friendly interface, improved memory management, optimized token usage, and loop-detection heuristics. HuggingGPT (Shen et al., 2023) expands the potential of artificial intelligence by linking to extensive AI models hosted on HuggingFace, thereby supporting a range of AI tasks in diverse domains and modalities, including language, vision, and speech. # 3 Design and Implementation Gentopia aims to provide easy assembly, sharing, and interaction of task-specialized agents. A single step to improve an agent's capability and efficiency benefits every agent that interacts with it, fostering collective growth toward greater intelligence. # 3.1 Rationale
2308.04030#8
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
9
Figure 2: SimplyRetrieve is an open-source tool that provides a localized, lightweight, and user-friendly interface to the Retrieval-Centric Generation approach for the machine learning community. This tool features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. pipelines. Additionally, LangChain (Chase, 2022) offers a comprehensive generative chat AI library featuring agents, data augmentation, and memory capabilities. Finally, Haystack (Pietsch et al., 2019) presents an all-encompassing NLP framework supporting question answering, answer generation, semantic document search, and retrieval augmentation. Both LangChain and Haystack employ agent-based pipelining techniques and can process complex queries. However, this complexity may hinder the explainability of LLMs, making it challenging to interpret their performance in retrieval-augmented settings.
2308.03983#9
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
9
# 2.2 Multi Agent Cooperation As LLMs demonstrate their overwhelming abilities, researchers find that multiple LLM agents can generate better results than a single one. Nair et al. (2023) is one of the earliest attempts at multi-agent cooperation: it builds a forum for agents to communicate feedback and iteratively improve their healthcare suggestions. Li et al. (2023) expand the application field of agent cooperation methods through role-playing; from programming to domain-specific QA, it surpasses single-agent baselines. Qian et al. (2023) build a software development company by meticulously dividing the development process into four distinct stages, leading to efficient resolution of specific subtasks. Liu et al. (2023b) first apply a multi-agent simulated society for alignment, where agents in a sandbox learn from social interaction to understand moral rules. Park et al. (2023) is the most sophisticated application of a multi-agent sandbox: the authors build support mechanisms to enable agents to produce believable individual and emergent social behaviors. However, no existing method provides a user-friendly interface
2308.04026#9
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
9
# 3.1 Rationale The impetus of Gentopia is rooted in the aspiration to construct capable and deployable AI assistants. A pertinent question that arises in this context is whether there is a necessity for a massive and expensive model like 175B GPT-4 to perform relatively simple tasks such as summarizing a web search. [Figure 2 diagram: logical diagrams of the agent types Vanilla, ReAct, ReWOO, OpenAI, and OpenAI Memory; components include users, the LLM, plugins, a function-call API, and a vector database.] Figure 2: Gentopia agent types, categorized according to the interaction paradigms between agents and plugins. Recent studies like TinyStories (Eldan and Li, 2023), Specializing Reasoning (Fu et al., 2023), Let's Verify Step by Step (Lightman et al., 2023), and ReWOO (Xu et al., 2023) direct our attention towards an intuitive yet undervalued observation: LLMs exhibit enhanced capabilities when a context/distribution shift is created, specifically narrowed towards certain target tasks.
2308.04030#9
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
10
On the other hand, our work offers a lightweight and transparent approach to implementing a sophisticated retrieval-centric, as well as retrieval-augmented, architecture, while maintaining a strong emphasis on response interpretability and wider accessibility to the community. Unlike previous works such as PrivateGPT (PrivateGPT), which provides a privacy-preserving chat AI tool but lacks customization options and analytical capabilities, our tool offers a comprehensive set of features for tailoring and analyzing retrieval-centric generation. Furthermore, to the best of our knowledge, we are the first to introduce the RCG concept and to show initial experiments with it using our tool. # 3 Tool Design SimplyRetrieve is designed to deploy the RCG pipeline: construct a knowledge base, tune the architecture, and make predictions (a sketch of these steps follows below). In this paper, we focus on describing the core specifications of the tool. For details about the setup procedures, refer to the repository at https://github.com/RCGAI/SimplyRetrieve. # 3.1 GUI and API based Retrieval-Centric Generation Platform
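To make the pipeline steps concrete, here is a minimal sketch under stated assumptions: sentence-transformers and FAISS are stand-in choices for the embedding model and ANN index, not necessarily SimplyRetrieve's defaults, and the function names are invented.

```python
# Minimal sketch of "construct knowledge base -> retrieve for prediction".
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def build_knowledge_base(docs: list[str]) -> faiss.IndexFlatIP:
    # Embed private documents and place them in an ANN index.
    vecs = encoder.encode(docs, normalize_embeddings=True)
    index = faiss.IndexFlatIP(vecs.shape[1])  # inner product = cosine on unit vectors
    index.add(vecs)
    return index

def retrieve(index, docs: list[str], query: str, k: int = 3) -> list[str]:
    # Nearest-neighbor search supplies the passages for the generation prompt.
    qv = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(qv, k)
    return [docs[i] for i in ids[0]]
```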
2308.03983#10
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
10
Figure 1: Front end of AgentSims, shown in a pixel-game style. Users can create agents and buildings in the left-side panel and observe agents' behaviors on the main screen. Besides setting-then-observing, users can also play as the mayor and talk with agents to intervene in the experiment. for unprofessional researchers or build a standard paradigm for agent support systems. Nonetheless, current multi-agent systems are task-oriented rather than evaluation-oriented. AgentSims works as a platform for easy benchmark construction. # 3 Key Components The Planning System reorganizes a goal by decomposing the target, summarizing the current condition, and generating subtasks. Specifically, it is assembled from a series of pluggable prompt modules, which assess the current progress toward the ultimate goal by checking the memory system and making decisions for the next steps (a sketch of this loop appears below). Once a new step is completed, it is recorded in the memory system. As shown in Figure 2, the key components of AgentSims can be divided into two parts: 1) generative agents driven by LLM support mechanisms, and 2) buildings and equipment that constitute the sandbox environment. # 3.1 Generative Agents
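The planning loop referenced above can be sketched as follows; all names (plan_step, run_plan, the prompt wording) are hypothetical stand-ins rather than AgentSims source code.

```python
# Hypothetical sketch of the planning loop: prompt modules check memory
# against the goal, emit the next subtask, and record completed steps.
from typing import Callable

def plan_step(llm: Callable[[str], str], memory: list[str], goal: str) -> str:
    progress = "; ".join(memory) or "nothing yet"
    return llm(f"Goal: {goal}\nDone so far: {progress}\n"
               "Reply DONE if the goal is met, else name the next subtask.")

def run_plan(llm: Callable[[str], str], goal: str,
             execute: Callable[[str], None], max_steps: int = 10) -> list[str]:
    memory: list[str] = []
    for _ in range(max_steps):
        subtask = plan_step(llm, memory, goal)
        if subtask.strip() == "DONE":
            break
        execute(subtask)
        memory.append(subtask)  # completed steps are written back to memory
    return memory
```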
2308.04026#10
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
10
completion. Besides, we build a clean and intuitive Command Line Interface (CLI) allowing users to "chat" with the agent interactively. Users can easily inherit or extend OutputHandler to use their own front-end chat interface. To help with a quick start, Gentopia provides multiple built-in agent config templates, allowing users to clone starter agents in a single command and explore different components in practice. However, there is no silver bullet for agent specialization. Various strategies can be employed depending on the target tasks. For instance, prompting "Let's think step by step" in context leads to more accurate math reasoning (Kojima et al., 2022). Providing few-shot examples can guide an ideal execution workflow. Instruction tuning allows an LLM to excel on fine-tuned datasets or tasks (Wei et al., 2021). Tweaking the agent type from ReAct (Yao et al., 2022) to ReWOO significantly reduces the execution time of observation-agnostic tasks like search & summarize; a sketch contrasting the two paradigms follows below. # 3.3 Adaptive Customization The agent configuration file encapsulates the critical components of an agent, including:
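As referenced above, a rough contrast of the two interaction paradigms, with `llm` and `tools` as stand-in callables; this is illustrative logic in runnable form, not Gentopia's implementation.

```python
# ReAct interleaves one LLM call per tool use; ReWOO plans all tool calls
# up front and solves once, so observation-agnostic tasks need fewer
# LLM round trips.
from typing import Callable

def react(llm: Callable[[str], str], tools: dict, task: str,
          max_steps: int = 5) -> str:
    transcript = task
    for _ in range(max_steps):
        thought = llm(transcript)            # one LLM call per step
        if thought.startswith("FINISH:"):
            return thought.removeprefix("FINISH:")
        tool, arg = thought.split(":", 1)    # e.g. "search: gentopia github"
        transcript += f"\nObservation: {tools[tool.strip()](arg.strip())}"
    return transcript

def rewoo(llm: Callable[[str], str], tools: dict, task: str) -> str:
    plan = llm(f"Plan tool calls, one 'tool: arg' per line, for: {task}")
    evidence = [tools[t.strip()](a.strip()) for t, a in
                (line.split(":", 1) for line in plan.splitlines() if ":" in line)]
    return llm(f"Task: {task}\nEvidence: {evidence}")  # single solving call
```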
2308.04030#10
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
11
to the repository at https://github.com/RCGAI/SimplyRetrieve. # 3.1 GUI and API based Retrieval-Centric Generation Platform As shown in Figure 2, there are two dense models in our tool: an LLM and an Approximate Nearest Neighbor Search (ANNS) based Knowledge Retriever. The LLM can be any of the off-the-shelf open-source LLMs available on Hugging Face (Wolf et al., 2020), ranging from 1B to more than 100B parameters, such as those of Touvron et al. (2023a,b). The Knowledge Retriever employs a dense retriever that is compatible with various embedding models available on Hugging Face. Additionally, our tool allows the integration of multiple knowledge bases simultaneously, enabling user-selectable knowledge bases depending on the specific use case.
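A hedged end-to-end sketch of pairing an off-the-shelf Hugging Face LLM with retrieved passages; the model choice and prompt format are illustrative assumptions, not the tool's defaults.

```python
# Wiring retrieved passages into a Hugging Face causal LM via the
# text-generation pipeline; any causal LM on the Hub could stand in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def rcg_answer(query: str, passages: list[str]) -> str:
    prompt = ("Context:\n" + "\n".join(passages) +
              f"\nQuestion: {query}\nAnswer:")
    out = generator(prompt, max_new_tokens=64, do_sample=False)
    return out[0]["generated_text"][len(prompt):]  # strip the echoed prompt
```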
2308.03983#11
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
11
# 3.1 Generative Agents If prompted properly, LLMs can generate believable behaviors (Park et al., 2022). However, to achieve human-like memory performance and long-term coherence, an LLM alone is not enough. We need auxiliary systems to enable agents to behave more naturally. Referring to recent work (Park et al., 2023; Wang et al., 2023a), we abstract these supportive mechanisms into three parts: a Planning System, a Memory System, and a Tool-Use System. Memory System. Agents capable of emulating human behavior need to comprehend a vast array of experiences, beyond what a prompt can contain. The complete memory stream is too expensive to accommodate in the limited context window, and attempting to do so can overwhelm the model. Thus, we add a memory system for agents' experience retention and retrieval. The system is built upon a vector database for efficient storing and retrieving. Specifically, every agent's daily memory is encoded into embeddings and stored in the database. Whenever agents face a new situation that requires previous memory, such as chatting with familiar people, the memory system retrieves the information about their relationship to improve the consistency of agent behavior.
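A minimal sketch of such a memory system, with numpy standing in for the vector database and sentence-transformers as an assumed embedding model; neither choice is confirmed by the paper.

```python
# Daily memories are embedded and stored; on a new situation the closest
# memories are retrieved by cosine similarity over unit-normalized vectors.
import numpy as np
from sentence_transformers import SentenceTransformer

class MemorySystem:
    def __init__(self):
        self.encoder = SentenceTransformer("all-MiniLM-L6-v2")
        self.texts: list[str] = []
        self.vecs: list[np.ndarray] = []

    def store(self, memory: str) -> None:
        self.texts.append(memory)
        self.vecs.append(self.encoder.encode(memory, normalize_embeddings=True))

    def retrieve(self, situation: str, k: int = 3) -> list[str]:
        q = self.encoder.encode(situation, normalize_embeddings=True)
        scores = np.array([v @ q for v in self.vecs])  # cosine on unit vectors
        return [self.texts[i] for i in scores.argsort()[::-1][:k]]
```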
2308.04026#11
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
11
# 3.3 Adaptive Customization The agent configuration file encapsulates the critical components of an agent, including: • Basic Attributes. The fundamental components of an agent encompass its name, version, type, description, and target tasks. The name serves as a unique identifier, while the version is utilized for tracking updates. The agent's type delineates its interaction paradigm with plugins. The description provides a succinct overview of the agent's usage, and the target_tasks list the tasks or examples for which the agent specializes. These descriptions can be selectively used in-context for agents to recognize each other upon interaction (a sketch of such a config appears below). The design of Gentopia is deeply grounded in our belief in sharing specialized agents for collective growth. Gentopia presents an easy and portable way to build agents, facilitating the reproduction, enhancement, and interaction of agents. A companion platform, GentPool, is used to register public agents, coupling each with a descriptive Wiki page to help users navigate and search for agents they need. GentPool also provides a unique ALM benchmark, GentBench, to quantitatively evaluate the multifaceted abilities of agents.
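The hedged example config referenced in the Basic Attributes item, loaded through PyYAML; the field names follow the text, while every value is invented for illustration.

```python
# Illustrative agent config covering the basic attributes named in the text.
import yaml

config = yaml.safe_load("""
name: math_helper
version: 0.1.0
type: react
description: Solves grade-school math word problems step by step.
target_tasks:
  - arithmetic word problems
  - unit conversion
""")
print(config["name"], config["target_tasks"])
```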
2308.04030#11
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
12
In terms of the GUI, we have designed a simple yet intuitive layout using Gradio (Abid et al., 2019), which provides a familiar streaming chatbot interface with user controls for managing the running modes of the retriever, engineering prompts, and configuring the tool. As depicted in Figure 3, our GUI features a comprehensive retrieval-centric tuning panel for functions including manual knowledge base selection from multiple sources and Mixture-of-Knowledge-Base modes. Moreover, we employ Explicit Prompt-Weighting of retrieval to adjust the level of influence exerted by the retriever. To ensure seamless integration, we also developed a comprehensive API access function using the Gradio Client Interface, and we allow multi-user [Figure 3 screenshot: streaming chatbot interface answering a sample query about KIOXIA Iwate Corporation, with functional tabs and a retrieval-centric tuning panel showing knowledge base selection, knowledge base mode, and prompt-weighting controls.]
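Programmatic access through the Gradio Client Interface might look like the following sketch; the server URL and `api_name` are assumptions to be checked against a running instance's API page.

```python
# Hedged sketch of calling the tool's API via the Gradio Client.
from gradio_client import Client

client = Client("http://localhost:7860")  # assumed local deployment
reply = client.predict("What does KIOXIA Iwate Corporation produce?",
                       api_name="/chat")  # endpoint name is an assumption
print(reply)
```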
2308.03983#12
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
12
Planning System. LLMs have shown some planning and reasoning capacities. However, faced with complex tasks, vanilla LLMs always fail for lack of long-term arrangement abilities. Hence, we introduce a Planning System to ensure agents' behaviors are coherent and believable. Tool-Use System. Ideally, agents continuously exploring the simulated world would learn from previous failures and successes, and thereby acquire diverse skills. In our framework, to realize this feature, we present a tool-use system, which endows agents with the ability to accomplish real-world tasks. [Figure 2 diagram: agents (bio, goal) interact with the environment through chat, operations on equipment, and feedback, backed by the support systems: Memory System, Planning System, and Tool-Use System.] Figure 2: Overview of the AgentSims architecture. Particularly, the tool-use system stores equipment-operation pairs learned from feedback on equipment use. Once agents select equipment to interact with through the planning and memory systems, they need to infer an initial operation from the description of the equipment, and the equipment returns an operation result as feedback. If the agent believes the result meets its operation purpose, a new skill is stored in the Tool-Use System; otherwise, the agent incorporates the feedback and refines its operations (sketched below). # 3.2 Buildings and Equipment # Interaction scenarios
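A compact sketch of the tool-use loop just described, with hypothetical names throughout: the agent proposes an operation from the equipment description, checks the returned feedback against its purpose, and stores successful equipment-operation pairs as skills.

```python
# Hypothetical names throughout; not AgentSims source.
from typing import Callable

skills: dict[str, str] = {}  # equipment name -> learned operation

def use_equipment(llm: Callable[[str], str], name: str, description: str,
                  purpose: str, equipment: Callable[[str], str]) -> str:
    op = skills.get(name) or llm(
        f"Equipment: {description}\nPurpose: {purpose}\nPropose one operation:")
    result = equipment(op)  # the equipment returns feedback on the operation
    verdict = llm(f"Purpose: {purpose}\nResult: {result}\nMet? yes/no:")
    if verdict.strip().lower().startswith("yes"):
        skills[name] = op   # a successful pair is stored as a new skill
    return result
```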
2308.04026#12
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
12
• LLM is a pivotal component that drives the agent's behavior. It is typically a dictionary of the model_name and parameters. Gentopia supports a variety of OpenAI LLMs and over 10 kinds of HuggingFace open-source LLMs (including Llama (Touvron et al., 2023), Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), Falcon (Almazrouei et al., 2023), Flan (Wei et al., 2021), MPT (MosaicML NLP Team, 2023), and more), each with a unique set of tunable parameters and usage costs. Notably, Gentopia unifies support in both CPU and GPU environments. # 3.2 Assembling Agents
2308.04030#12
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
13
Figure 3: The GUI design of SimplyRetrieve features four primary tabs. The Chat tab serves as the central query and response interface with a retrieval-centric tuning panel. The Prompt tab provides an intuitive editor for modifying, updating, and saving prompts used by the AI. The Config tab enables users to customize various tool settings and save their preferences. Finally, the Analysis tab offers a comprehensive analytics platform for analyzing and logging data related to SimplyRetrieve's performance and usage. concurrent access to both UIs, leveraging Gradio's queue functionality to manage requests efficiently. The retrieval-centric tuning panel enables lightweight and simple access to RCG. Using the manual knowledge base selection mode, users can construct and import multiple private knowledge bases simultaneously into this tool. The ability to select the most relevant knowledge base for each task allows users to maintain control over the selection process while avoiding unexpected outcomes. Our MoKB mode enables automatic selection of the most suitable knowledge base based on the similarity between the query and the knowledge bases' functional descriptions. We use semantic cosine similarity in the embedding space to calculate these scores, providing an efficient and lightweight approach to knowledge base auto-selection (sketched below). By updating the functional descriptions in the configuration file, users can further enhance the accuracy of the selection algorithm. We leave further exploration for future work. # 3.2 Private Knowledge Base Constructor
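The MoKB auto-selection step reduces to a cosine-similarity argmax, sketched below with an assumed embedding model and invented knowledge-base descriptions.

```python
# Pick the knowledge base whose functional description is most
# cosine-similar to the query embedding (unit-normalized vectors).
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
kb_descriptions = {
    "company_docs": "internal policies, org charts, and product manuals",
    "research_notes": "experiment logs and paper summaries",
}

def select_kb(query: str) -> str:
    qv = encoder.encode(query, normalize_embeddings=True)
    scores = {name: float(encoder.encode(desc, normalize_embeddings=True) @ qv)
              for name, desc in kb_descriptions.items()}
    return max(scores, key=scores.get)

print(select_kb("What is our remote work policy?"))  # likely "company_docs"
```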
2308.03983#13
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
13
# 3.2 Buildings and Equipment # Interaction scenarios Regarding the researchers' backgrounds and purposes, we design two interaction modes: User Mode and Developer Mode. In User Mode, the target users are researchers who care little about the underlying support systems. For researchers pursuing better LLM performance, Developer Mode provides flexible protocols for developing different support mechanisms.
2308.04026#13
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
13
At its core, Gentopia embodies each customized agent as a single yaml config file, which can be sent to AgentAssembler to create a corresponding agent instance. An agent instance acts similarly to a language model, where users can use "run" or "stream" to get completed or incremental completions.

| Tasks | Sub-tasks | Description | Data Source |
|---|---|---|---|
| Reasoning | Math | measures agent ability to solve a wide range of math problems. | MATH (Hendrycks et al., 2021b), GSM8K (Cobbe et al., 2021) |
| Reasoning | Coding | measures agent ability to write code to fulfill requirements and pass tests. | HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), APPS (Hendrycks et al., 2021a) |
| Reasoning | Planning | measures agent reasoning to complete a task in correct order. | LLM-Plan (Valmeekam et al., 2023) |
| Reasoning | Commonsense | measures agent ability in reasoning for everyday questions. | BBH (Suzgun et al., 2022) |
| Knowledge | World knowledge | measures agent ability in answering a wide range of factual questions. | MMLU (Hendrycks et al., 2020) |
| Knowledge | Domain-specific knowledge | measures agent with domain-specific knowledge. | AGIEval (Zhong et al., 2023) |
2308.04030#13
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
14
Additionally, our Explicit Prompt-Weighting feature allows manual adjustment of the degree of influence retrievers have on the language model, enabling customized control over the balance between retriever and LLM. Through prompt-engineering or token weight adjustment, users can adapt the tool to their specific needs, ensuring optimal performance. SimplyRetrieve incorporates Explicit Prompt-Weighting through prompt-engineering, where the weightage can be adjusted to fine-tune the percentage of retrieved knowledge tokens used in the prompt. However, we have not implemented token weight adjustment in this study and leave it for future work.

# 3.2 Private Knowledge Base Constructor

Our Retrieval-Centric Generation Platform is assisted by a Private Knowledge Base Constructor that creates a local and personalized knowledge base from the user's documents. This constructor employs a scalable documents loader that can handle large volumes of documents by chunking and streaming the loading, splitting, and knowledge base creation processes, allowing for efficient document processing. The constructor supports various document formats such as PDF, TXT, DOC, DOCX, PPT, PPTX, HTML, MD, and CSV, among others, and can easily be expanded by editing the configuration file. The length of passages produced by the document-splitting function is also easily configurable to meet specific requirements.
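As a rough illustration of the chunked, configurable splitting described above, the sketch below streams fixed-length passages from a plain-text file; the character-based lengths, the overlap, and the function name are illustrative assumptions rather than the tool's exact implementation.

from pathlib import Path
from typing import Iterator

def split_document(path: str, passage_len: int = 512, overlap: int = 64) -> Iterator[str]:
    """Stream fixed-length, overlapping passages from a plain-text document."""
    text = Path(path).read_text(encoding="utf-8")
    step = passage_len - overlap
    for start in range(0, len(text), step):
        yield text[start:start + passage_len]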
2308.03983#14
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
14
Interactive buildings and equipment are necessities for the diversity of an LLM sandbox. They compose the physical environments of the simulated world. In our framework, a building or location contains equipment such as stoves or office desks. Thus, buildings are defined by the equipment they contain, and equipment is the basic element composing the interactive environment. More specifically, a piece of equipment is defined by definition texts describing its features and a support function, which can be either hard-coded by the developer or a language model that supports self-adaptive agent-equipment interaction. When an agent interacts with equipment, as shown in Figure 2, its operation text is sent to the background support model. The support function then returns the operation outcome based on the predefined rules or model-generated texts. For example, if an agent wants to get a cup of tea from a stove, the operation is 'Get a cup of tea' and the support function may return 'Meaningless operation' according to the hard-coded rules, or 'You can not get tea from a stove' generated by the model. The agent would then learn from the feedback and refine its operations.

# 4.1 User Mode

In the User Mode, AgentSims provides an interactive interface in a pixel game style, as shown in Figure 1. Researchers can create agents and construct buildings and equipment in a graphical interface, focusing on the rationality of the experiment design, free from complex background driving mechanisms.
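The rule-or-model fallback of the support function described above can be sketched as follows; the rule table and the llm_generate stand-in are toy assumptions, not AgentSims' actual implementation.

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for the backing language model call."""
    return "You can not get tea from a stove."  # placeholder model output

HARD_CODED_RULES = {
    ("stove", "Get a cup of tea"): "Meaningless operation",
}

def support_function(equipment: str, operation: str) -> str:
    """Return the outcome of an agent's operation on a piece of equipment."""
    if (equipment, operation) in HARD_CODED_RULES:
        return HARD_CODED_RULES[(equipment, operation)]
    # Fall back to a model-generated outcome when no rule matches.
    prompt = f"An agent tries to '{operation}' using a {equipment}. What happens?"
    return llm_generate(prompt)

print(support_function("stove", "Get a cup of tea"))  # -> "Meaningless operation"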
2308.04026#14
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
14
| Tasks | Sub-tasks | Description | Data Source |
|---|---|---|---|
| Knowledge | World knowledge | measures agent ability in answering a wide range of factual questions. | MMLU (Hendrycks et al., 2020) |
| Knowledge | Domain-specific knowledge | measures agent with domain-specific knowledge. | AGIEval (Zhong et al., 2023) |
| Knowledge | Web-retrieval | measures how capable an agent is to surf online and retrieve real-time information. | Curated |
| Safety | Integrity | measures agent ability to avoid generating unsafe or offensive content, even when prompted in crafty ways (e.g. jailbreaking). | Curated |
| Safety | Harmlessness | measures agent bias in gender, ethics, age, etc. | BBQ (Parrish et al., 2021), Bold (Dhamala et al., 2021) |
| Multilingual | Translation | asks agent to correctly translate among different languages. | Curated |
| Multilingual | Understanding | similarly tests whether an agent understands something in different languages. | Curated |
| Efficiency | Token usage | These metrics indicate how expensive or time-consuming agents are to execute on average and on different tasks. | |
2308.04030#14
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
15
After generating the sources for the knowledge base, we use a dense encoder to convert the text into numerical embeddings that can be used for semantic search and retrieval. To accommodate large-scale knowledge bases, we utilize ANNS for efficient semantic retrieval. By default, our tool employs the Hierarchical Navigable Small World (HNSW) (Malkov and Yashunin, 2020) algorithm, but we also provide support for flat indexing and the IVFPQ-HNSW method, which combines inverted file indexing with product quantization and HNSW coarse quantizers. The Index Constructor function automatically creates the required index files for semantic searching. We implement our indexing function using the Faiss library (Johnson et al., 2019).
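A minimal sketch of the default HNSW indexing with Faiss follows; the embedding dimension, neighbor count, and random vectors are placeholders for the encoder's actual output, not the tool's exact settings.

import numpy as np
import faiss

d = 768                                                 # encoder embedding dimension (illustrative)
passages = np.random.rand(10_000, d).astype("float32")  # stand-in passage embeddings

index = faiss.IndexHNSWFlat(d, 32)        # HNSW graph with 32 neighbors per node
index.add(passages)
faiss.write_index(index, "kb.index")      # persist the index file

query = np.random.rand(1, d).astype("float32")
distances, ids = index.search(query, 5)   # retrieve the top-5 passages
print(ids)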
2308.03983#15
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
15
Agent Creation. Users can define agents within the system through an easy-to-use front end, as shown in Figure 3. AgentSims provides various protocols for users to create functional agents: not only basic information such as goals and biography, but also options for the Memory and Planning Systems. We pre-design a list of memory and planning systems, and users can choose their preference from a drop-down menu.

Figure 3: Agent Creation

Building Creation. Users can also customize the physical environment by constructing buildings. As shown in Figure 4, users define a building by choosing a pre-configured building with equipment inside. Note that the equipment in buildings is predefined but can be modified in the Developer Mode.

Figure 4: Building Creation

Experiment Intervene. Besides observing, users can play as the major agent to participate in the experiment. By talking with other agents, users can intervene in the experiment naturally rather than crudely modifying agents' memory or goals.

# 4.2 Developer Mode

Developer Mode is designed for professional developers who are familiar with the properties of LLMs and pursue better LLM performance on a well-defined complex task. The highly modularized design of AgentSims enables developers to add new functions within a few lines of code.
2308.04026#15
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
15
| Tasks | Sub-tasks | Description | Data Source |
|---|---|---|---|
| Efficiency | Token usage, Run time | These metrics indicate how expensive or time-consuming agents are to execute on average and on different tasks. | |

Table 1: An overview of GentBench's task classification, task descriptions, data sources, and example instances. To push the capabilities of tool-augmented language models beyond simple LLMs, GentBench strategically filters for more challenging data rather than simply aggregating various datasets.

and GPU loading, together with 8-bit and 4-bit weight quantization, thereby adapting to a wide range of computation environments.

• Prompt Template is essentially an f-string template with variable placeholders and a validation check. It is intrinsically linked with the agent type to instruct the LLM in-context. Gentopia provides built-in prompts default to each agent type, such as Vanilla, OpenAI, OpenAI_Memory, ReAct, and ReWOO.

• Plugins enable agents to interact with external tools or other agents, thereby extending their capabilities beyond single language models. Gentopia also allows agents to be built in a hierarchical architecture, such that those closer to the leaves are increasingly specialized and narrowed to more granular sub-tasks.
2308.04030#15
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
16
# 3.3 Retrieval Tuning Module

The Retrieval Tuning Module of our tool includes three key functionalities: prompt-engineering, tool configuration, and analysis and data logging. The prompt-engineering functionality allows users to easily edit, update, and save retrieval-related prompts using a user-friendly Prompt tab within our GUI. Available prompts are AI Prefix, Retriever Prefix, Retriever Suffix, Model Prefix, and Model Suffix. The configuration functionality enables users to modify and save all configurable settings via the Config tab within our GUI. Finally, the analysis and data logging functionality collects and displays retrieval-related analysis data, including the retrieved knowledge base, query, response, and sentence-level and token-level similarity scores, in the Analysis tab of our GUI. Similarity scores are calculated from both the semantic cosine similarity of sentence-to-sentence embeddings and of all-token-to-token embeddings. This approach allows us to capture both local and global similarities between sentences, leading to more accurate assessments of their comparability. Additionally, users can save all logged data to a log file for further analysis. GUI designs are depicted in Figures 4, 5 and 6 of Appendix A.2. To deploy an end-user mode, users can simply disable the update functions in the Retrieval Tuning Module through command-line options.
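The two similarity scores can be sketched as below, assuming a Hugging Face encoder with mean pooling for the sentence-level score; the model choice and the pooling scheme are illustrative assumptions rather than the tool's exact computation.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-base")
enc = AutoModel.from_pretrained("intfloat/multilingual-e5-base")

def token_embeddings(text: str) -> torch.Tensor:
    """L2-normalized token embeddings from the encoder's last hidden layer."""
    with torch.no_grad():
        out = enc(**tok(text, return_tensors="pt")).last_hidden_state[0]
    return torch.nn.functional.normalize(out, dim=-1)

def similarity_scores(a: str, b: str) -> tuple[float, float]:
    ta, tb = token_embeddings(a), token_embeddings(b)
    # Sentence-level: cosine similarity of mean-pooled embeddings.
    sentence = torch.nn.functional.cosine_similarity(
        ta.mean(dim=0), tb.mean(dim=0), dim=0).item()
    # Token-level: average of each token's best match in the other sentence.
    token = (ta @ tb.T).max(dim=1).values.mean().item()
    return sentence, token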
2308.03983#16
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
16
Agent Design. Developers have the flexibility to create agents tailored for various objectives and to assemble diverse agents within a single sandbox for observation. To streamline the process of agent customization, we've abstracted the LLM backbone and the distinct support systems into separate classes and function calls, as illustrated below. This empowers developers to personalize an agent by making adjustments to these abstract functions.
2308.04026#16
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
16
• Memory allows LLMs to retrieve information out-of-context. This is particularly useful when it's necessary to circumvent the context limitations of LLMs or to conserve token consumption. Implementation details are described in the appendix.

# 3.4 Agent Evaluation Benchmark

GentBench is a unique benchmark for agents or ALMs. This section elucidates the rationale behind GentBench and its methodical construction.

# 3.4.1 Objectives

Due to the massive need for training datasets, researchers and organizations tend to use public NLP benchmarks, such as MMLU (Hendrycks et al., 2020), MATH (Hendrycks et al., 2021b), and BIG-Bench (BIG-bench authors, 2023), to enrich the LLM training corpus. Such methods inevitably introduce evaluation bias when the resulting agents are tested against the same set of tasks at inference.

GentBench probes performance across diverse aspects such as reasoning, knowledge, safety, multilingual capabilities, robustness, memory, and efficiency. This comprehensive approach transcends the limitations of single datasets, facilitating a more holistic evaluation of an agent's capabilities.

By filtering out straightforward problems, GentBench encourages the use of external tools to tackle more complex issues beyond the capabilities of a pure LLM. Such tasks usually require the synergy of powerful plugins and a capable LLM to leverage the plugins on target tasks.
2308.04030#16
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
17
# 4 Evaluations

In this section, we perform several qualitative evaluations to demonstrate the usability and behavior of our tool. We construct our knowledge base using the most recent information available on the website of an organization². We utilize the models publicly available on Hugging Face, Wizard-Vicuna-13B³ (Xu et al., 2023; Chiang et al., 2023) as the LLM and Multilingual-E5-base⁴ (Wang et al., 2022) as the encoder for our evaluations, unless specified otherwise. We load both models onto a single Nvidia A100 GPU in 8-bit INT8 mode for lower memory usage and higher inference speed. We set the temperature of the LLM to 0. We utilize HNSW for indexing of knowledge bases and set the number of passages retrieved to 5.

²https://www.kioxia.com/en-jp/top.html
³https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored
⁴https://huggingface.co/intfloat/multilingual-e5-base
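The setup above can be reproduced roughly as follows; loading in 8-bit requires the bitsandbytes package, and the prompt and generation arguments are illustrative assumptions.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ehartford/Wizard-Vicuna-13B-Uncensored"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", load_in_8bit=True)

prompt = "answer the following question with the provided knowledge. ..."  # RCG suffix, see Section 4.1
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)  # greedy decoding, i.e. temperature 0
print(tok.decode(out[0], skip_special_tokens=True))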
2308.03983#17
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
17
class LLMCaller:
    def __init__(self, model: str) -> None:
        self.model = get_model(model)

    def ask(self, prompt: str) -> str:
        result = self.model.generate(prompt)
        return result


class Agent:
    def __init__(self, name, bio, goal, model, memorySystem, planSystem,
                 buildings, cash):
        self.state = State()
        self.state.buildings = buildings
        self.state.cash = cash
        self.caller = LLMCaller(model)

    def plan(self) -> None:
        # Build the planning prompt and ask the backbone LLM for a plan.
        self.state.plan_prompt = ...
        self.state.plan = self.caller.ask(self.state.plan_prompt)

    def memory_store(self) -> None:
        # Build the memory prompt and store the model's summary.
        self.state.memory_prompt = ...
        self.state.memory = self.caller.ask(self.state.memory_prompt)

    def use(self, facility: str, operation: str, description: str) -> None:
        # Build the equipment-use prompt and record the operation outcome.
        self.state.use_prompt = ...
        self.state.use = self.caller.ask(self.state.use_prompt)

Building and Equipment Design. To customize the physical environment, developers can design new buildings and equipment by configuring the corresponding json files. A new piece of equipment can be defined by its type, description and a support function.
2308.04026#17
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
17
# 3.4.2 Benchmark Construction

The construction of GentBench involves an extensive collection and curation of tasks, and a meticulous process to filter out less challenging problems. The gpt-3.5-turbo model serves as a benchmark to differentiate between easy and challenging questions. Each question in the collected datasets is initially attempted by gpt-3.5-turbo. Subsequently, gpt-4, specialized to act as a fair grader via in-context learning, assesses the correctness of gpt-3.5-turbo's answer. This rigorous evaluation results in a refined dataset composed solely of the challenging problems that gpt-3.5-turbo fails to solve independently.

To prevent overfitting and enhance the model's general applicability, GentBench partitions the benchmark into public and private components. The public component fosters model development with open access, while the private component is reserved for agents to be merged into GentPool, testing the generalized abilities of the agent on unseen tasks. This dual structure ensures a robust and comprehensive evaluation process, setting GentBench apart from conventional benchmarks.
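The filtering loop described above can be sketched as follows; ask() is a hypothetical wrapper around a chat-completion client, and the grading prompt is an illustrative assumption rather than GentBench's actual prompt.

GRADER_PROMPT = (
    "Question: {q}\nReference answer: {ref}\nCandidate answer: {cand}\n"
    "Reply with 'correct' or 'incorrect'."
)

def ask(model: str, prompt: str) -> str:
    """Hypothetical chat-completion wrapper; wire this up to your own client."""
    raise NotImplementedError

def filter_hard(questions: list) -> list:
    hard = []
    for item in questions:
        candidate = ask("gpt-3.5-turbo", item["question"])  # first attempt
        verdict = ask("gpt-4", GRADER_PROMPT.format(
            q=item["question"], ref=item["answer"], cand=candidate))
        if "incorrect" in verdict.lower():
            hard.append(item)  # keep only problems gpt-3.5-turbo fails
    return hard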
2308.04030#17
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
18
# 4.1 Qualitative Evaluation

We compare the results of three approaches: Retrieval-Centric Generation (RCG), Retrieval-Augmented Generation (RAG), and Retrieval-OFF Generation (ROG). Note that in this work, we define RAG as allowing more permissive integration of the LLM's inherent knowledge with externally retrieved knowledge, whereas RCG prioritizes a clear demarcation between context interpretation and knowledge memorization. Investigating advanced methods for extracting RCG behavior is a promising research topic. In this work, we conduct simple experiments using a prompt-engineering technique to reveal the potential of the RCG approach. Specifically, for RCG, we employ a retrieval suffix prompt that reads "answer the following question with the provided knowledge." For RAG, we use a less constraining prompt that reads "answer the following question. You may use the provided knowledge." Finally, for ROG, which is the baseline in our evaluation, we disable the retriever altogether. Using the prompt-engineering function in SimplyRetrieve, we can easily perform these experiments. Queries are human annotated. In Tables 1 and 2, blue-colored text indicates factually correct responses while red-colored text indicates hallucinations.
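At the prompt level, the three approaches differ only in the suffix and in whether retrieved passages are injected; the suffix strings follow the text above, while the overall template layout is an illustrative assumption about how the prompt is assembled.

SUFFIXES = {
    "RCG": "answer the following question with the provided knowledge.",
    "RAG": "answer the following question. You may use the provided knowledge.",
    "ROG": None,  # retriever disabled: no knowledge, no suffix
}

def build_prompt(mode: str, query: str, passages: list) -> str:
    """Assemble the prompt for the selected retrieval mode."""
    if SUFFIXES[mode] is None:
        return query
    knowledge = "\n".join(passages)
    return f"{knowledge}\n{SUFFIXES[mode]}\n{query}"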
2308.03983#18
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
18
[{"id": 1, "type": "counter", "function":..., "description": "This is the counter ...",}] In some cases, agents can purchase commodities or earn salaries at the equipment. We use another configure file to annotate these economic features. [{ "id": 1, "menu": { "chicken": 20,}, "salary":0,}], We define buildings by a type and the equipment it contains. Hence we use a two-dimensional array to mark the facility ids in the building blocks. [{"assets": "store_v1.2_0719", "id": 1, "price": 2000, "type": "store", "blocks":[[1,0,0...1,1]], "equipment":[0,1,0..]]}] # Implementation AgentSims is run using Python 3.91 and requires installing the requirements.txt file provided in the codebase using Python’s package manager PyPI2. # 5.1 Backend is built using Tornado3, a The web server lightweight Python web framework. It also uses the websockets library for API calls and push noti- fications, and mysql-connector-python to interact with the MySQL4 database.
2308.04026#18
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
18
# 3.4.3 EvalPipeline

GentBench employs a range of specialized agents, known as "graders", each designed to cater to different evaluation needs, including binary outcomes (GatedGrader), continuous scoring (ScoreGrader), pairwise outcomes (DojoGrader), custom measurements (InstructedGrader), and unit test execution (CodeGrader). For users' convenience, we provide the MultiProcessEvalPipeline class to automatically sample from each evaluation class, conduct evaluations in parallel with matched graders, and aggregate the results into a comprehensive report. We also integrate our evaluation results with Zeno (Cabrera et al., 2023), a powerful visualization tool assisting users in collecting nuanced insights into the strengths and weaknesses of agents.

# 3.5 Collective Contribution

As an open-source project, Gentopia actively encourages users to contribute their specialized agents to GentPool. Each merge request consists of an agent YAML configuration file and optional companion files such as custom tools, prompts, and utility methods. Our team will review the shared agents and score them using private GentBench data. Furthermore, we will create a dedicated Wiki page for each contributed agent.
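To illustrate the grader-dispatch idea behind the EvalPipeline (not Gentopia's actual API), the sketch below matches each task to a grader and evaluates in parallel; the grader implementations are toy stand-ins.

from concurrent.futures import ThreadPoolExecutor

def gated_grader(task: dict) -> bool:
    # Binary outcome: toy exact-match against the reference answer.
    return task["prediction"].strip() == task["answer"].strip()

def score_grader(task: dict) -> float:
    # Continuous score: toy token-overlap ratio.
    pred, ref = set(task["prediction"].split()), set(task["answer"].split())
    return len(pred & ref) / max(len(ref), 1)

GRADERS = {"gated": gated_grader, "score": score_grader}

def evaluate(tasks: list) -> list:
    """Grade each task with its matched grader, in parallel."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(GRADERS[t["grader"]], t) for t in tasks]
        return [f.result() for f in futures]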
2308.04030#18
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
19
| Approach | Response |
|---|---|
| RCG | Kioxia has factories located in both Yokkaichi city, Mie prefecture and Kitakami city, Iwate prefecture in Japan. |
| RAG | Kioxia has factories located in various regions across the globe including Japan, Thailand, Malaysia, China, and the United States. However, its main production site is located in Yokkaichi City, Mie Prefecture, Japan. |
| ROG | Our main production site for HDD and SSD products is located in Yamagata, Japan. We also have manufacturing sites in Thailand and the Philippines that produce optical disc media such as Blu-ray Discs and DVDs. |

Table 1: Qualitative comparison between Retrieval-Centric, Retrieval-Augmented and Retrieval-OFF approaches using prompt-engineering in SimplyRetrieve.
2308.03983#19
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
19
# 5.2 Frontend

The web client is built with Unity⁵. The client, built with WebGL⁶, is embedded in the project code and can be accessed through a browser after proxying with nginx⁷.

# 6 Example Application Tasks

# 6.1 Subject LLM as participants

When subject LLM agents are participants in an artificial scenario, researchers can evaluate the LLM's social abilities, such as Theory of Mind (ToM). In this case, the formulation of specific social scenes is realized by other baseline agents driven by stronger LLMs. For example, to study a new model's social adaptation abilities in a hostile environment, we can embed colleague agents driven by GPT4 with a strong desire to bully newcomers. We then place subject agents into this adversarial milieu and test whether the new model can understand others' emotions and improve how colleagues perceive it.

# 6.2 Subject LLM as mayor

To assess the LLM's long-term planning and organization abilities, researchers can appoint the subject LLM as the mayor of a town or the president of a company, where residents or employees are driven by baseline agents like GPT4. To overcome the difficulties set ahead deliberately or emerging during the experiments, and then achieve the final goal of the task, the subject LLM needs to recruit new residents to handle new problems, issue sound policies
2308.04026#19
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary) 14F05, 14J26 (Secondary) 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
19
Once the agents are incorporated into Gentopia, users can utilize built-in commands to clone or call them for downstream use cases, fostering a dynamic and collaborative environment. New agents added to the pool will be publicized with each Gentopia release. This collective contribution of specialization is a cornerstone of Gentopia and encourages more capable and reliable intelligent agents.

# 4 Case Study

We briefly showcase the process of building an agent that acts as an experienced and visionary entrepreneur, helping users create business plans with Gentopia. Further, users can evaluate the created agent and share it publicly in GentPool.

# 4.1 Initializing an Agent

Figure 3 illustrates a concrete workflow for working with agents in GentPool. We provide built-in bash scripts to facilitate the creation, cloning, or deletion of agents. GentPool registers template agents for each built-in agent type, allowing
2308.04030#19
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
20
Based on the results in Table 1, we observed that RCG successfully answered our query accurately, whereas RAG attempted to integrate the retrieved knowledge base with the LLM's own knowledge of the specific organization, resulting in partially erroneous information. Meanwhile, ROG responded to our query with the LLM's preexisting mistaken knowledge, generating falsehoods. Our findings suggest that crafting clear and direct prompts, such as "answer the given question using the provided knowledge", can encourage retrieval-centric behavior from the LLM, although further refined prompt-engineering may be necessary for optimal performance.

As shown in Table 2, RCG effectively addressed the query and produced accurate results. When combining RCG with 50% Explicit Prompt-Weighting (RCG-EPW), where only the first 50% of the retrieved knowledge base is injected into the prompt, the model generated partial responses but still maintained factual accuracy. In contrast, the pure LLM without RC enabled (ROG) provided hallucinatory responses, highlighting the importance of incorporating retrieval-centric techniques in language models for improved performance.
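The 50% setting above can be sketched as a simple token-budget truncation of the retrieved knowledge; whitespace tokenization and the function name are illustrative simplifications, not the tool's exact mechanism.

def weight_knowledge(passages: list, weight: float = 0.5) -> str:
    """Keep only the first `weight` fraction of retrieved knowledge tokens."""
    tokens = " ".join(passages).split()
    keep = int(len(tokens) * weight)
    return " ".join(tokens[:keep])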
2308.03983#20
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
20
¹https://www.python.org/downloads/release/python-390
²https://pypi.org/
³https://www.tornadoweb.org/en/stable/
⁴https://www.mysql.com/
⁵https://unity3d.com
⁶https://get.webgl.org
⁷https://nginx.org/en/

and modify the out-of-date ones, found new functional buildings to satisfy emerging requirements, and so on. By analyzing the success rate of the LLM mayor under different difficulties, researchers can gain valuable insights into the diverse capabilities of the LLM.

# 6.3 Applications besides Evaluation
2308.04026#20
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, MSC-class: 14J60 (Primary), 14F05, 14J26 (Secondary), 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
20
[Figure 3 screenshot; recoverable CLI commands from the figure: create a new agent with $ ./create_agent my_agent; clone from pooled agents with $ ./clone_agent react_template my_agent; delete an agent with $ ./delete_agent my_agent; assemble and chat via $ python assemble.py my_agent; evaluate via $ python evaluate.py my_agent; share the agent in GentPool via $ git push origin my_agent_branch. Recoverable config fragments: prompt_template: !prompt gentpool.pool.elon.prompt.PromptOfElon; plugins: !include ./dr_science/agent.yaml.]

Figure 3: A representative workflow using Gentopia.AI with GentPool. A) Agent initiation via scripts and templates; B) Configuring and assembling agents; C) User interaction and performance evaluation, including both CLI-based interaction (C.1) and GUI-based interaction (C.2); D) Sharing specialized agents in the GentPool.

# 4.4 Agent Specialization and Publication
2308.04030#20
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
21
| Approach | Response |
| --- | --- |
| RCG | Kioxia developed a high capacity storage device which could store large amounts of image data required by AI algorithms. This allowed them to deploy memory-centric AI systems capable of handling multiple classification tasks simultaneously without compromising performance or accuracy. |
| RCG-EPW | Deployment of Memory-centric AI with high capacity storage |
| ROG | The company developed a new image classification algorithm that achieved top-1 accuracy of 82.7% on the ImageNet validation set, which is an improvement over previous state-of-the-art results by more than one percentage point. This was made possible through advancements in hardware and software technologies such as Tensor Core GPUs and our own deep learning framework, ONNX-MXNET. |

Table 2: Effects of Retrieval-Centric Generation in SimplyRetrieve, based on the knowledge base about an organization.

# 4.2 Accuracy & Speed Evaluations
2308.03983#21
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
21
# 6.3 Applications besides Evaluation

Besides evaluating LLMs, AgentSims can be used as a data generation platform. Owing to the impressive NLG abilities of LLMs, researchers have applied them to data annotation and augmentation. However, some data involving social judgement and participation necessitate a more intricate approach than a single prompt can provide. Thus, we can simulate a specific social background and let LLMs generate data more precisely. Liu et al. (2023b) have applied simulated societies to alignment data generation. With AgentSims tailored for more intricate social simulations, its potential for enhancing data generation across various disciplines is undeniable.

Moreover, our program can also benefit social science researchers by enabling more controllable preliminary experiments. Given that state-of-the-art LLMs can understand human instructions and simulate human behaviours, social science researchers can design social environments as they wish for preliminary studies. Once researchers have a hypothesis, pilot experiments can be conducted in our virtual sandbox as a feasibility check.

# 7 Conclusion

In this paper, we present AgentSims, a visualized and program-based infrastructure for constructing LLM test sandboxes. AgentSims aims to facilitate researchers in effectively building LLM evaluation tasks. We not only make all of its code openly available but also commit to continuously updating its documentation with comprehensive tutorials.

# Limitations
2308.04026#21
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, MSC-class: 14J60 (Primary), 14F05, 14J26 (Secondary), 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
21
# 4.4 Agent Specialization and Publication

... users to clone, for instance, the "react_template" to start off. An agent instance simply contains an "agent.yaml" file and two optional companion files to store custom prompts or tools.

Users can employ various methods for agent specialization, improving agent performance and efficiency. These approaches include in-context prompt tuning, such as using few-shot examples; fine-tuning a specialized LLM on desired tasks or datasets; optimizing component configs, such as trying new agent types and other sub-agents; and improving the capabilities of tools. We are also actively developing a companion project to collect and support specialization methods in the future.

# 4.2 Custom Configuration
2308.04030#21
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
22
# 4.2 Accuracy & Speed Evaluations

In addition to evaluating the effectiveness of RCG using human annotations, we also conduct an internal evaluation of our tool's performance using a self-generated dataset. To create this dataset, we pass relevant passages through the language model Llama-2-13B-chat (Touvron et al., 2023b) to generate 10 query and label pairs. For details on how we generated this dataset, refer to Appendix A.4. We employ the Rouge-L score (Lin, 2004) as our performance metric and perform this evaluation via the API function of SimplyRetrieve. Our results in Table 3 show that RCG significantly improves the Rouge-L score compared to the baseline ROG approach, while also being slightly more competitive than RAG. Moreover, despite the fact that RCG processes longer prompts than ROG due to the addition of knowledge tokens, we observe a decrease in processing time owing to the increased precision and brevity of the generated responses. Specifically, the number of response tokens generated by RCG is on average 36% lower than that generated by ROG. This efficient performance may facilitate broader adoption within the community, as users can expect quicker response generation without sacrificing accuracy.
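As a concrete illustration of the metric used above, the sketch below scores (response, label) pairs with Rouge-L via the open-source rouge-score package; the paper does not name its scoring implementation, so the package choice and the sample pair are assumptions.

```python
# Sketch: mean Rouge-L F1 over (response, label) pairs, assuming the
# rouge-score package (pip install rouge-score); the sample pair below
# is hypothetical.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

pairs = [  # (model response, reference label)
    ("Deployment of memory-centric AI with high capacity storage.",
     "Kioxia deployed memory-centric AI with high capacity storage."),
]

# scorer.score(target, prediction) returns a dict of Score namedtuples.
scores = [scorer.score(label, response)["rougeL"].fmeasure
          for response, label in pairs]
print(f"mean Rouge-L F1: {sum(scores) / len(scores):.3f}")
```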
2308.03983#22
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
22
# Limitations

As a sandbox system, AgentSims' simulation ability is limited by the accuracy of LLMs and the diversity of buildings and equipment; it can never fully reflect real-world cases. Besides, although task-based evaluation is a sound approach to measuring the general ability of LLMs, it can hardly reflect fine-grained abilities such as mathematical reasoning, and the pass rate of tasks cannot provide insights into why LLMs succeed or fail.

# References

BIG bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
2308.04026#22
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, MSC-class: 14J60 (Primary), 14F05, 14J26 (Secondary), 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
22
# 4.2 Custom Configuration

Users can configure essential components of the agent such as name, description, target_task, plugins, etc. For instance, as shown in Figure 3, users can use the prompt template 'PromptOfElon' and GPT-4 to construct the LLM component. They can also add plugins (e.g., 'google_search' and 'web_page') to boost the agent. GentPool links a wiki page for registered agents and built-in tools, which is continually updated with each Gentopia release. Users can employ special Config Operators to customize important components of an agent, such as "!prompt" for customizing prompt_template, "!tool" for self-defined tools as plugins, "!include" for sub-agents as plugins, "!file" to read local files in text format, and "!env" to read an environmental variable; a hypothetical config sketch follows below. Finally, we encourage users to share their tuned agents with GentPool by submitting a Pull Request. We will update new agents and tools, as well as the corresponding Wiki, at each version release.

# 5 Conclusion
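Grounding the configuration fields named in Section 4.2 above, here is a minimal, hypothetical sketch of parsing such an agent config. The field names beyond those quoted in the text (name, description, target_task, plugins) are assumptions, and Gentopia's custom YAML tags such as !prompt and !include require the framework's own loader, so this sketch parses a config that omits them.

```python
# Sketch: parse a hypothetical agent config in the style described above.
# The `llm` nesting is an assumption; Gentopia's actual schema may differ,
# and its custom tags (!prompt, !tool, !include, !file, !env) need the
# framework's own YAML loader, so they are omitted here.
import yaml  # pip install pyyaml

AGENT_YAML = """
name: my_agent
description: An agent that researches business plans and strategies.
target_task: web research
llm:
  model_name: gpt-4
plugins:
  - google_search
  - web_page
"""

config = yaml.safe_load(AGENT_YAML)
print(config["name"], "->", ", ".join(config["plugins"]))
```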
2308.04030#22
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
23
| Approach | Rouge-L Score | time/query (s) |
| --- | --- | --- |
| ROG | 0.186 | 17.22 |
| RAG | 0.359 | 18.41 |
| RCG | 0.413 | 11.67 |

Table 3: Response accuracy & speed evaluation of SimplyRetrieve.

Finally, our findings suggest that even a modestly sized LLM of 13B parameters can demonstrate satisfactory performance with the RCG approach on never-before-seen factual knowledge, without any model fine-tuning, which potentially facilitates the deployment of generative AI systems in real-world scenarios. See Appendix A.2 for further discussions and A.5 for ablation studies.

# 5 Conclusion

We introduced SimplyRetrieve, an open-source tool that aims to provide a localizable, lightweight, and user-friendly GUI and API platform for a Retrieval-Centric Generation approach based on LLMs. Our tool enables developers and end-users to easily interact and develop with a privacy-preserving, locally implemented LLM-based RCG system, which we believe will contribute to the democratization of these technologies within the machine learning community. Increased clarity in the role separation between context interpretation and knowledge memorization can potentially boost the performance and interpretability of generative AI systems, facilitating deployments.

# Limitations
2308.03983#23
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
23
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.

Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.
2308.04026#23
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, MSC-class: 14J60 (Primary), 14F05, 14J26 (Secondary), 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
23
# 4.3 Testing and Evaluation

There are two methods to assess the performance of a new agent: qualitative human evaluation and quantitative GentBench evaluation. Users can call "assemble.py" to initiate a CLI chat interface and converse with the target agent. Alternatively, users can use "evaluate.py" to customize the EvalPipeline on GentBench and obtain scoring with GUI-based visualization, as discussed in Section 2.4.3 (a minimal scripting sketch of these two entry points appears below).

# 5 Conclusion

This paper introduces Gentopia.AI, an open-source platform designed for tool-augmented LLMs. Our core framework, Gentopia, addresses the shortcomings of existing ALMs with its pre-built, extensible components for agent assembly. Furthermore, we present GentPool, a platform that integrates agent sharing, interaction, and a built-in benchmark named GentBench for comprehensive ALM performance evaluation. The streamlined and flexible design of Gentopia encourages efficient agent building, tuning, and sharing, thus laying a foundational structure for the collective growth and progression in the field of ALMs.

# Acknowledgement

Gratefully, we thank Dr. Graham Neubig and the Zeno team for advising and integrating with us on agent evaluations and visualizations.
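Referring back to the two entry points in Section 4.3 above, the sketch below drives them from Python. Only the script names and the agent-name argument appear in the text (and in Figure 3); invoking them via subprocess is an assumption for illustration.

```python
# Sketch: script the two evaluation entry points named above. Only the
# script names and agent-name argument come from the text; running them
# through subprocess is an illustrative assumption.
import subprocess

AGENT = "my_agent"

# Qualitative check: open the CLI chat interface for the agent.
subprocess.run(["python", "assemble.py", AGENT], check=True)

# Quantitative check: score the agent on GentBench.
subprocess.run(["python", "evaluate.py", AGENT], check=True)
```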
2308.04030#23
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
24
# Limitations

It is important to note that this tool does not provide a foolproof solution for ensuring completely safe and responsible responses from generative AI models, even within a retrieval-centric approach. The development of safer, interpretable, and responsible AI systems remains an active area of research and ongoing effort.

Texts generated by this tool may vary even when prompts or queries are only slightly modified, owing to the next-token-prediction behavior of current-generation LLMs. This means users may need to carefully fine-tune both the prompts and the queries to obtain optimal responses.

# References

Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, and James Zou. 2019. Gradio: Hassle-free sharing and testing of ML models in the wild. arXiv preprint arXiv:1906.02569.
2308.03983#24
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]
2308.04026
24
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023. CAMEL: Communicative agents for "mind" exploration of large-scale language model society.

Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-Eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models.

Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. 2023a. Evaluating the logical reasoning ability of ChatGPT and GPT-4.

Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, and Soroush Vosoughi. 2023b. Training socially aligned language models in simulated human society.

Varun Nair, Elliot Schumacher, Geoffrey Tso, and Anitha Kannan. 2023. DERA: Enhancing large language model completions with dialog-enabled resolving agents.
2308.04026#24
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, by a few lines of codes. Our demo is available at https://agentsims.com .
http://arxiv.org/pdf/2308.04026
Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen
cs.AI, MSC-class: 14J60 (Primary), 14F05, 14J26 (Secondary), 68T42
submit to EMNLP2023 demo track
null
cs.AI
20230808
20230808
[ { "id": "2009.03300" }, { "id": "2306.11644" }, { "id": "2305.08322" }, { "id": "2303.17580" } ]
2308.04030
24
# Acknowledgement

Gratefully, we thank Dr. Graham Neubig and the Zeno team for advising and integrating with us on agent evaluations and visualizations.

Gentopia.AI is a new open-source community and will keep expanding its features over the long term. We appreciate and encourage the community to participate and collaborate on ALM-related research, engineering work, and agent applications. Please get in touch with us for future opportunities.

# Ethics Statement

In developing our framework Gentopia, we adhered to rigorous ethical principles to ensure the responsible use and deployment of ALMs. We made the framework as transparent as possible so that users can rely on it with confidence. Also, the data sources used in GentBench are collected from publicly available datasets, and no demographic or confidential information from users is accessed, safeguarding their privacy and anonymity.

Furthermore, the availability of multiple agents and datasets in Gentopia is intended to facilitate diverse and unbiased research while ensuring that developers can easily customize and share their agents responsibly. Researchers and developers can explore the full potential of ALMs while safeguarding the interests of all stakeholders involved.

# References
2308.04030#24
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, gentbench, an integral component of gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release gentopia on Github and will continuously move forward.
http://arxiv.org/pdf/2308.04030
Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
cs.AI
null
null
cs.AI
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2305.20050" }, { "id": "2305.07759" }, { "id": "2304.06364" }, { "id": "2107.03374" }, { "id": "2108.07732" }, { "id": "2305.09645" }, { "id": "2110.14168" }, { "id": "2210.09261" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2210.03629" }, { "id": "2303.05398" }, { "id": "2301.12726" }, { "id": "2302.06706" }, { "id": "2303.17580" }, { "id": "2110.08193" }, { "id": "2109.01652" }, { "id": "2306.08302" } ]
2308.03983
25
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego De Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 2206–2240. PMLR.
2308.03983#25
SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years. Integrating a knowledge retrieval architecture allows for seamless integration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direction that explicitly separates roles of LLMs and retrievers in context interpretation and knowledge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly interface to these sophisticated advancements to the machine learning community. SimplyRetrieve features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can explore the potential of RCG for improving generative AI performance while maintaining privacy standards. The tool is available at https://github.com/RCGAI/SimplyRetrieve with an MIT license.
http://arxiv.org/pdf/2308.03983
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
cs.CL, cs.AI
12 pages, 6 figures
null
cs.CL
20230808
20230808
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.14314" }, { "id": "2307.09288" }, { "id": "2212.03533" }, { "id": "1906.02569" }, { "id": "2304.12244" } ]