LLM As DBA
Xuanhe Zhou (Tsinghua University, Beijing, China) [email protected]
Guoliang Li (Tsinghua University, Beijing, China) [email protected]
Zhiyuan Liu (Tsinghua University, Beijing, China) [email protected]

# ABSTRACT
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential in understanding valuable documents and accordingly generating reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and
(iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.

Figure 1: LLM As DBA. (The figure sketches an example reasoning step, "Thought: high memory usage seems to be caused by poor join performance and much inactive memory. Reasoning: poor joins can be solved by plan optimization. Action: optimize_query_plan", connected to documents, a query rewriter, a query planner and executor, and the system configuration.)
# 1 INTRODUCTION

Limitations of DBAs. Currently, most companies still rely on DBAs for database maintenance (DM, e.g., tuning, configuring, diagnosing, optimizing) to ensure high performance, availability and reliability of the databases. However, there is a significant gap between DBAs and DM tasks. First, it takes a long time to train a DBA. There are numerous relevant documents (e.g., administrator guides), which can span over 10,000 pages for just one database product, and it takes DBAs several years to partially grasp the skills by applying them in real practice. Second, it is hard to obtain enough DBAs to manage a large number of database instances, e.g., millions of instances on cloud databases. Third, a DBA may not provide an in-time response in emergent cases (especially for correlated issues across multiple database modules), causing great financial losses.

Limitations of Database Tools. Many database products are equipped with semi-automatic maintenance tools to relieve the pressure on human DBAs [5, 6, 10-12]. However, they have several limitations. First, they are built on empirical rules [4, 24] or small-scale ML models (e.g., classifiers [13]), which have poor text processing capability and cannot utilize available documents to answer basic questions. Second, they cannot flexibly generalize to scenario changes. For empirical methods, it is tedious to manually update rules with the newest versions of the documents, and learned methods require costly model retraining and are not suitable for online maintenance. Third, they cannot reason about the root cause of an anomaly the way DBAs do, such as looking up more system views based on the initial analysis results. This capability is vital to detect useful information in complex cases.
Our Vision: A Human-Beyond Database Administrator. To this end, we aim to build a human-beyond "DBA" that can tirelessly learn from documents (see Figure 1), which, given a set of documents, automatically (1) learns experience from the documents, (2) obtains status metrics by interacting with the database, (3) reasons about possible root causes from the abnormal metrics, and (4) accordingly gives optimization advice by calling proper tools.

Challenges. Recent advances in Large Language Models (LLMs) have demonstrated superiority in understanding natural language, generating basic code, and using external tools. However, leveraging LLMs to design a "human-beyond DBA" is still challenging. (1) Experience learning from documents. Just like human learners taking notes in classes, although LLMs have been trained on vast corpora, important knowledge points (e.g., diagnosis experience) cannot be easily utilized without careful attention. Most texts are long documents (with varying input lengths and section correlations), and different formats of the extracted experience can greatly affect the utilization capability of the LLM. (2) Reasoning by interacting with the database. With the extracted experience, we need to inspire the LLM to reason about the given anomalies. Different from basic prompt design in machine learning, database diagnosis is an interactive procedure with the database (e.g., looking up system views or metrics). However, LLM responses are often untrustworthy (the "hallucination" problem), and it is critical to design strategies that guide the LLM to utilize proper interfaces of the database and derive reasonable analysis. (3) Mechanism for communication across multiple LLMs. Similar to human beings, one LLM alone may be stuck in sub-optimal solutions, and it is vital to derive a framework where multiple LLMs collaborate to tackle complex database problems. By pooling their collective intelligence, these LLMs can provide comprehensive and smart solutions that a single LLM or even a skilled human DBA would struggle to think of.

Idea of LLM as DBA. Based on the above observations, we introduce D-Bot, an LLM-based database administrator. First, D-Bot transforms documents into experiential knowledge by dividing them into manageable chunks and summarizing them for further extraction of maintenance insights with the LLM. Second, it iteratively generates and assesses different formats of task descriptions to assist the LLM in understanding the maintenance tasks better.
Third, D-Bot utilizes external tools by employing matching algorithms to select appropriate tools and providing the LLM with instructions on how to use the APIs of the selected tools. Once equipped with the experience, tools, and input prompt, the LLM can detect anomalies, analyze root causes, and provide suggestions, following a tree of thought strategy to revert to previous steps if a failure occurs. Moreover, D-Bot promotes collaborative diagnosis by allowing multiple LLMs to communicate based on predefined environment settings, inspiring more robust solutions via debate-like communications.

Contributions. We make the following contributions. (1) We design an LLM-centric database maintenance framework and explore its potential to overcome the limitations of traditional strategies. (2) We propose an effective data collection mechanism by (i) detecting experiential knowledge from documents and (ii) leveraging external tools with matching algorithms. (3) We propose a root cause analysis method that utilizes the LLM and a tree search algorithm for accurate diagnosis. (4) We propose an innovative concept of collaborative diagnosis among LLMs, thereby offering more comprehensive and robust solutions to complex database problems. (5) Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes.
# 2 PRELIMINARIES

Database Anomalies. In databases, there are five common problems that can negatively affect the normal execution status. (1) Running Slow: the database exhibits a longer response time than expected, leading to bad execution performance. (2) Full Disk Capacity: the database's disk space is exhausted, preventing it from storing new data. (3) Execution Errors: the database experiences errors, potentially due to improper error handling in the application (e.g., leaking sensitive data or system details) or issues within the database (e.g., improper data types). (4) Hanging: the database becomes unresponsive, which is usually caused by long-running queries, deadlocks, or resource contention. (5) Crashing: the database unexpectedly shuts down, making data inaccessible. For a mature database product, each anomaly type is explained in the documentation and is suitable to be learned by LLMs.
Observation Tools for Anomaly Detection. "Observability of the database" is vital to detect the above anomalies, covering logs, metrics, and traces. (1) Logs are records of database events. For example, PostgreSQL supports slow query logs (with error messages that can help debug and solve execution issues), but these logs may record large volumes of data and are generally not enabled in the online stage. (2) Metrics capture the aggregated database and system statistics. For example, views like pg_stat_statements record the templates and statistics of slow queries, and tools like Prometheus [20] provide numerous monitoring metrics, making it possible to capture the real-time system status. (3) Traces provide visibility into how requests behave during execution in the database. Different from logs, which help to identify the database problem, traces help to locate the specific abnormal workload or application.

Optimization Tools for Anomaly Solving. Users are mainly concerned with how to restore the normal status after an anomaly occurs. Here we showcase some optimization tools. (1) For slow queries, since most open-source databases are weak in logical transformation, there are external engines (e.g., Calcite with ~120 query rewrite rules) and tuning guides (e.g., Oracle with over 34 transformation suggestions) that help to optimize slow queries. (2) For knob tuning, many failures (e.g., max_connections in Postgres) or bad performance (e.g., memory management knobs) are correlated with database knobs; for example, for a slow workload, increase innodb_buffer_pool_size in MySQL by 5% if the memory usage is lower than 60%. Similarly, there are index tuning rules that generate potentially useful indexes (e.g., taking columns within the same predicate as a composite index). Besides, we can utilize more advanced methods, such as selecting among heuristic methods [3, 21, 22] and learned methods [7-9, 15, 23, 25, 26] for problems like index lacking, which is not within the scope of this paper.
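To make the flavor of such empirical rules concrete, here is a minimal sketch of the buffer-pool rule quoted above; the get_metric, get_knob, and set_knob helpers are hypothetical stand-ins for a real monitoring and configuration layer.

```python
# A minimal sketch of an empirical knob-tuning rule (Section 2).
# `get_metric`, `get_knob`, and `set_knob` are hypothetical helpers standing
# in for a real monitoring/configuration layer; the thresholds follow the
# MySQL example in the text (increase innodb_buffer_pool_size by 5% if the
# memory usage is lower than 60%).

def tune_buffer_pool(get_metric, get_knob, set_knob):
    mem_usage = get_metric("memory_usage")                 # fraction in [0, 1]
    if mem_usage < 0.60:
        current = get_knob("innodb_buffer_pool_size")      # size in bytes
        set_knob("innodb_buffer_pool_size", int(current * 1.05))
        return "increased buffer pool by 5%"
    return "no change"
```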
We aim to design D-Bot, an LLM-based DBA, to automatically diagnose database anomalies and use the LLM to directly (or, by calling appropriate tools, indirectly) provide the root causes.

# 3 THE VISION OF D-BOT

Existing LLMs are criticized for problems like "Brain in a Vat" [14]. Thus, it is essential to establish close connections between LLMs and the target database, allowing us to guide LLMs in effectively maintaining the database's health and functionality. Hence, we propose D-Bot, which is composed of two stages.

First, in the preparation stage, D-Bot generates experience (from documents) and a prompt template (from diagnosis samples), which are vital to guide online maintenance.

• Documents → Experience. Given a large volume of diverse, long, unstructured database documents (e.g., database manuals, white papers, blogs), we first split each document into chunks that can be processed by the LLM. To aggregate correlated chunks together (e.g., chunk v_i that explains the meaning of "bloat-table" and chunk v_j that utilizes "bloat-table" in root cause analysis), we generate a summary for each chunk based on both its content and its subsections. Finally, we utilize the LLM to extract maintenance experience from chunks with similar summaries (Section 4).
• Prompt Template Generation. To help the LLM better understand the DM tasks, we iteratively generate and score different formats of task descriptions using DM samples (i.e., given the anomaly and solutions, we ask the LLM to describe the task), and adopt the task description that both scores high performance and is sensible to human DBAs (in case of learning bias) for LLM diagnosis (Section 5).
Figure 2: Overview of D-Bot. (The figure outlines the components: Documents → Experience via chunk segmentation, summarization, and extraction; external tool learning with tool retrieval and tool APIs; prompt template generation via prompt generation and prompt scoring; and collaborative diagnosis with role allocation, environment settings such as {order: sequential, visibility: all}, and chat summary.)

Second, in the maintenance stage, given an anomaly, D-Bot iteratively reasons about the possible root causes by taking advantage of external tools and multi-LLM communications.
• External Tool Learning. For a given anomaly, D-Bot first matches relevant tools using algorithms like Dense Retrieval. Next, D-Bot provides the tool APIs together with their descriptions to the LLM (e.g., function calls in GPT-4). After that, the LLM can utilize these APIs to obtain metric values or optimization solutions. For example, in PostgreSQL, the LLM can acquire the templates of the slowest queries from the pg_activity view. If these queries consume much CPU resource (e.g., over 80%), they could be root causes and be optimized with a rewriting tool (Section 6).
• LLM Diagnosis. Although the LLM can understand the functions of tool APIs, it may still generate incorrect API requests, leading to diagnosis failures. To solve this problem, we employ the tree of thought strategy, where the LLM can go back to previous steps if the current step fails. This significantly increases the likelihood of LLMs arriving at reasonable diagnosis results (Section 7).
• Collaborative Diagnosis. A single LLM may execute only the initial diagnosis steps and end up early, leaving the problem inadequately resolved. To address this limitation, we propose the use of multiple LLMs working collaboratively. Each LLM plays a specific role and communicates according to the environment settings (e.g., priorities, speaking orders). In this way, we can enable LLMs to engage in debates and inspire more robust solutions (Section 8).

# 4 EXPERIENCE DETECTION FROM DOCUMENTS

Document learning aims to extract experience segments from textual sources, where the extracted segments are potentially useful in different DM cases. For instance, when analyzing the root causes of performance degradation, the LLM utilizes the "many_dead_tuples" experience to decide whether dead tuples have negatively affected the efficiency of index lookups and scans.

Desired Experience Format. To ensure the LLM can efficiently utilize the experience, each experience fragment should include four fields, as shown in the following example. "name" helps the LLM to understand the overall function; "content" explains how the root cause can affect database performance (e.g., the performance hazards of many dead tuples); "metrics" provides hints for matching this experience segment, i.e., the LLM will utilize this experience if the abnormal metrics appear in the "metrics" field; and "steps" provides the detailed procedure for checking whether the root cause exists by interacting with the database (e.g., obtaining the ratio of dead tuples to live tuples from the table statistics views).

"name": "many_dead_tuples",
"content": "If the accessed table has too many dead tuples, it can cause bloat-table and degrade performance",
"metrics": ["live_tuples", "dead_tuples", "table_size", "dead_rate"],
"steps": "For each accessed table, if the total number of live tuples and dead tuples is within an acceptable limit (1000), and the table size is not too big (50MB), it is not a root cause. Otherwise, if the dead rate also exceeds the threshold (0.02), it is considered a root cause. And we suggest to clean up dead tuples in time."
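To illustrate how such a "steps" field can be executed against collected statistics, here is a minimal sketch; the stats dictionary is a hypothetical container for the metrics listed in the "metrics" field of one accessed table, and the thresholds are hard-coded from the example above.

```python
# A sketch of checking the "many_dead_tuples" experience mechanically.
# Thresholds (1000 tuples, 50MB, 0.02 dead rate) come from the "steps"
# field above; `stats` is assumed to hold the listed metrics for one table.

def many_dead_tuples(stats: dict) -> bool:
    total = stats["live_tuples"] + stats["dead_tuples"]
    if total <= 1000 and stats["table_size"] <= 50 * 1024 * 1024:
        return False                       # within acceptable limits
    dead_rate = stats["dead_tuples"] / max(total, 1)
    return dead_rate > 0.02                # root cause: clean up dead tuples

print(many_dead_tuples({"live_tuples": 5000, "dead_tuples": 400,
                        "table_size": 80 * 1024 * 1024}))  # True (rate ~0.074)
```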
"For each accessed table, if the total number of live tuples and dead tuples is within an acceptable limit (1000), and table size is not too big (50MB), it is not a root cause. Otherwise, if the dead rate also exceeds the threshold (0.02), it is considered a root cause. And we suggest to clean up dead tuples in time." LLM for Experience Detection. It aims to detect experience seg- ments that follow above format. Since different paragraphs within a long document may be correlated with each other (e.g., the concept of â bloat-tableâ appearing in â many_dead_tuplesâ is introduced in another section), we explain how to extract experience segments without losing the technical details. Step1: Segmentation. Instead of partitioning documents into fixed- length segments, we divide them based on the structure of the sec- tion structures and their content. Initially, the document is divided into chunks using the section separators. If a chunk exceeds the maximum chunk size (e.g., 1k tokens), we further divide it recur- sively into smaller chunks. Step2: Chunk Summary. Next, for each chunk denoted as ð ¥, a summary ð ¥ .ð ð ¢ð ð ð ð ð ¦ is created by feeding the content of ð ¥ into LLM with a summarization prompt ð ð ð ¢ð ð ð ð ð ð §ð : ð ð ð ¢ð ð ð ð ð ð §ð
2308.05481#13
2308.05481#15
2308.05481
[ "2307.07924" ]
2308.05481#15
LLM As DBA
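A minimal sketch of this recursive splitting follows; treating markdown-style headers as section separators and counting whitespace-separated tokens in place of a real tokenizer are both assumptions for illustration.

```python
# A sketch of Step 1 (recursive segmentation). Assumptions: sections are
# delimited by markdown-style headers, and whitespace tokens stand in for
# a real tokenizer.

import re

MAX_TOKENS = 1000  # maximum chunk size from the text (~1k tokens)

def segment(document: str) -> list:
    chunks = re.split(r"\n(?=#+ )", document)   # split on section separators
    result = []
    for chunk in chunks:
        result.extend(_split_recursively(chunk))
    return result

def _split_recursively(chunk: str) -> list:
    if len(chunk.split()) <= MAX_TOKENS:
        return [chunk]
    paragraphs = chunk.split("\n\n")
    if len(paragraphs) == 1:                    # no paragraph boundary: halve
        mid = len(chunk) // 2
        return _split_recursively(chunk[:mid]) + _split_recursively(chunk[mid:])
    mid = len(paragraphs) // 2
    left = "\n\n".join(paragraphs[:mid])
    right = "\n\n".join(paragraphs[mid:])
    return _split_recursively(left) + _split_recursively(right)
```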
Step 2: Chunk Summary. Next, for each chunk x, a summary x.summary is created by feeding the content of x into the LLM with a summarization prompt p_summarize:

p_summarize = "Summarize the provided chunk briefly ... Your summary will serve as an index for others to find technical details related to database maintenance ... Pay attention to examples even if the chunks cover other topics."

The generated x.summary acts as a textual index of x, enabling the matching of chunks that contain similar content.

Step 3: Experience Extraction. Once the summaries of the chunks are generated, the LLM parses the content of each chunk and compares it with the summaries of other chunks having similar content, guided by the extraction prompt p_extract. This way, experience segments that correlate with the key points of the summaries are detected.
p_extract = "Given a chunk summary, extract diagnosis experience from the chunk. If uncertain, explore diagnosis experience in chunks with similar summaries."

In our implementation, given a document, we use the LLM to extract experience segments in the above four-field format.
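A minimal sketch of Steps 2 and 3 under these prompts is shown below; llm (a text-completion function) and similar (a summary-similarity predicate) are hypothetical helpers, not part of the original design.

```python
# A sketch of Steps 2-3: index every chunk with an LLM-written summary,
# then extract experience from each chunk together with the chunks whose
# summaries are similar. `llm` and `similar` are assumed helpers.

from dataclasses import dataclass

P_SUMMARIZE = ("Summarize the provided chunk briefly ... Your summary will "
               "serve as an index for others to find technical details ...")
P_EXTRACT = ("Given a chunk summary, extract diagnosis experience from the "
             "chunk. If uncertain, explore diagnosis experience in chunks "
             "with similar summaries.")

@dataclass
class Chunk:
    text: str
    summary: str = ""

def detect_experience(chunks, llm, similar):
    for c in chunks:                                   # Step 2: chunk summary
        c.summary = llm(f"{P_SUMMARIZE}\n\n{c.text}")
    segments = []
    for c in chunks:                                   # Step 3: extraction
        related = [o.text for o in chunks
                   if o is not c and similar(c.summary, o.summary)]
        context = "\n\n".join([c.text] + related)
        segments.append(llm(f"{P_EXTRACT}\n\nSummary: {c.summary}\n\n{context}"))
    return segments
```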
Detected Maintenance Experience. In Figure 3, we showcase the simplified diagnosis procedure together with some necessary details, coming from chunks originally located in different sections of the given documents (e.g., a maintenance guide with over 100 pages).

1. Background Understanding. It is crucial to grasp the context of system performance, such as recent changes in customer expectations, workload type, or even system settings.

2. Database Pressure Checking. This step identifies database bottlenecks, such as tracking CPU usage and active sessions, and monitoring system views (e.g., pg_stat_activity and pgxc_stat_activity) to focus on non-idle sessions.

3. Application Pressure Checking. If there is no apparent pressure on the database or the resource consumption is very low (e.g., CPU usage below 10% and only a few active sessions), it is suggested to investigate the application side, such as exhausted application server resources, high network latency, or slow processing of queries by application servers.

4. System Pressure Checking. The focus shifts to examining the system resources where the database is located, including CPU usage, IO status, and memory consumption.

5. Database Usage Checking. Lastly, we can investigate suboptimal database usage behaviors, such as (1) addressing concurrency issues caused by locking waits, (2) examining database configurations, (3) identifying abnormal wait events (e.g., io_event), (4) tackling long/short-term performance declines, and (5) optimizing poorly performing queries that may be causing bottlenecks.

# 5 DIAGNOSIS PROMPT GENERATION

Instead of directly mapping extracted experience to new cases, we next explore how to teach LLMs to (1) understand the database maintenance tasks and (2) reason over the root causes by themselves.

Input Enrichment. With a database anomaly x as input, we can enrich x with additional description information into a so-called input prompt x'. On the one hand, x' helps the LLM better understand the task intent. On the other hand, since database diagnosis is generally a complex task that involves multiple steps, x' preliminarily implies how to divide the complex task into sub-tasks in a proper order, further enhancing the reasoning of the LLM. From our observation, the quality of x' can greatly impact the performance of the LLM on maintenance tasks [27] (Figure 2).
Thus, we first utilize the LLM to suggest candidate prompts based on a small set of input-output pairs (e.g., 5 pairs per prompt). Second, we rank these generated prompts based on a customized scoring function (e.g., the ratio of detected root causes), and reserve the best prompts (e.g., the top-10) as candidates. Finally, we select the best one to serve as the input prompt template for the incoming maintenance tasks, as sketched below.
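Here is a minimal sketch of this generate-then-rank loop; llm and score are hypothetical stand-ins for the completion model and the customized scoring function (e.g., the ratio of detected root causes over held-out anomaly-solution pairs).

```python
# A sketch of the generate-then-rank prompt selection in Section 5.
# `llm` is a hypothetical completion function; `score` maps a candidate
# task description to a quality value (e.g., ratio of detected root causes).

def select_prompt_template(io_pairs, llm, score, n_candidates=20, top_k=10):
    # Ask the LLM to propose task descriptions from a few input-output pairs.
    candidates = [llm("Describe the task given these examples:\n"
                      + str(io_pairs[:5]))
                  for _ in range(n_candidates)]
    # Keep the top-k candidates by the customized score.
    ranked = sorted(candidates, key=score, reverse=True)[:top_k]
    # Adopt the best one (optionally after a human DBA sanity check).
    return ranked[0]
```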
# 6 EXTERNAL TOOL LEARNING

As we know, the efficient use of tools is a hallmark of human cognitive capabilities [17, 18]. When human beings encounter a new tool, they start to understand the tool and explore how it works, i.e., taking it as something with particular functions and trying to understand what those functions are used for. Likewise, we aim to inspire a similar ability within the LLM.

Tool Retrieval. We first retrieve the appropriate tools for the diagnosis task at hand, whose descriptions are denoted as D_t. There are several methods we can use, such as BM25, LLM Embeddings, and Dense Retrieval. (1) BM25 is a common probabilistic retrieval method that ranks the tool descriptions D_t based on their relevance to the given anomaly x [19]. (2) LLM Embeddings convert the tool descriptions D_t into embeddings E_t using the LLM L, i.e., E_t = L(D_t).
These embeddings capture the semantic meanings in a multi-dimensional space, helping to find related tools even in the absence of keyword overlap. (3) Dense Retrieval uses a neural network f to generate dense representations of both the anomaly x and the tool descriptions D_t, denoted as Dense_x and Dense_D respectively. To retrieve the relevant tools, we calculate the similarity between Dense_x and Dense_D, and rank the tools based on these similarity scores.
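As an illustration, a minimal sketch of the embedding-based ranking is shown below; the embed function is a hypothetical stand-in for an LLM embedding endpoint or a dense encoder.

```python
# A sketch of similarity-based tool retrieval: embed the anomaly (Dense_x)
# and each tool description (Dense_D), then rank by cosine similarity.
# `embed` is an assumed helper returning a list of floats.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_tools(anomaly: str, tool_descriptions: list, embed, k: int = 3):
    q = embed(anomaly)                                        # Dense_x
    scored = [(cosine(q, embed(d)), d) for d in tool_descriptions]  # Dense_D
    scored.sort(key=lambda s: s[0], reverse=True)
    return [d for _, d in scored[:k]]
```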
The proper method for tool retrieval depends on the specific scenario. BM25 is efficient for quick results with large volumes of API descriptions in the tools and clear anomaly characteristics. LLM Embeddings excel at capturing semantic and syntactic relationships, which is especially useful when relevance is not obvious from keywords (e.g., different metrics with similar functions). Dense Retrieval is ideal for vague anomalies, as it captures context and semantic meaning, but is more computationally costly.

# 7 LLM DIAGNOSIS

Tree Search Algorithm using LLM. To avoid diagnosis failures caused by incorrect actions (e.g., a non-existent API name) derived by the LLM, we propose to utilize the tree of thought strategy, which can guide the LLM to go back to previous actions if the current action fails.

Step 1: Tree Structure Initialization. We initialize a tree structure whose root node is the diagnosis request (Figure 4). Utility methods are utilized to manipulate the tree structure, and the UCT score for a node v is computed based on the modifications during planning, i.e., UCT(v) = w(v)/n(v) + C * sqrt(ln(N)/n(v)), where n(v) denotes the selection frequency of v, w(v) denotes the success ratio of detecting root causes, and N denotes the total number of simulations. Note that if the action of node v fails to call a tool API, w(v) is set to -1.

Step 2: Simulate Execution. This step kicks off the execution of simulations starting from the root node of the tree. It involves selecting nodes based on a specific standard (e.g., detected abnormal metrics). If the criterion for selecting a new node is met, a new node is chosen; otherwise, the node with the highest UCT value is selected.
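Below is a minimal sketch of the UCT computation and child selection in Steps 1 and 2; the exploration constant C is an assumption (as in standard UCT), and the caller is expected to set w to -1 when a node's action fails to call a tool API.

```python
# A sketch of UCT scoring and child selection for the diagnosis tree.
# C is an assumed exploration constant; w tracks the success ratio of
# detecting root causes and n the selection frequency, as in the text.

import math
from dataclasses import dataclass, field

C = 1.4  # exploration constant (assumed)

@dataclass
class Node:
    w: float = 0.0                      # success ratio (set to -1 on API failure)
    n: int = 0                          # selection frequency
    children: list = field(default_factory=list)

def uct(node: Node, total_simulations: int) -> float:
    if node.n == 0:
        return float("inf")             # visit unexplored actions first
    return node.w / node.n + C * math.sqrt(math.log(total_simulations) / node.n)

def select_child(node: Node, total_simulations: int) -> Node:
    return max(node.children, key=lambda c: uct(c, total_simulations))
```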
Figure 3: The outline of diagnosis experience extracted from documents. (The figure lays out the five checking steps above, each annotated with checkpoints such as CPU usage (OPS CPU usage rate, top/sar commands, flame graphs), IO issues (disk cache/RAID write policy, throttled disk bandwidth), memory issues, network issues (disconnection, high latency, full bandwidth), database-related views and OPS monitoring indicators, concurrency and configuration issues, and long/short-term performance jitters.)

Figure 4: Example LLM diagnosis by tree of thought. (The tree branches from the root diagnosis request into checks such as an abnormal mem_usage metric and cpu_diagnosis_agent({"dead_tuples": 5000, ...}) reporting many_dead_tuples.)
Step 3: Existing Node Reflection. For each node on the path from the root node to the selected node, reflections are generated based on the decisions made at previous nodes.
For example, we count on the LLM to rethink the benefit of analyzing non-resource-relevant metrics. If the LLM decides that an action cannot find any useful information, the UCT value is reduced and set to that of its parent node. In this way, we can enhance the diagnosis efficiency.

Step 4: Terminal Condition. If the LLM cannot find any more root causes (corresponding to a leaf node) within a threshold number of attempts (e.g., five), the algorithm ends and the LLM outputs the final analysis based on the detected root causes.

# 8 COLLABORATIVE DIAGNOSIS FOR COMPLEX CASES

A single LLM may be limited in its ability to fully resolve a problem (e.g., getting stuck in the initial steps). Collaborative diagnosis involves the utilization of multiple LLMs to collectively address complex cases by leveraging their unique role capabilities. This section introduces the communicative framework for database diagnosis [1, 16].
• Agents. In the communicative framework, agents can be undertaken by human beings or LLMs. Humans can provide LLM agents with scenario requirements (e.g., business changes over the incoming period) and prior knowledge (e.g., historical anomalies). On the other hand, each LLM agent is dedicated to a distinct domain of functions. For example, we include three LLM agents in the initial implementation: (1) Chief DBA is responsible for collaboratively diagnosing and detecting root causes with the other agents; (2) CPU Agent is specialized in CPU usage analysis and diagnosis; and (3) Memory Agent focuses on memory usage analysis and diagnosis. Each LLM agent can automatically invoke tool APIs to retrieve database statistics, extract external knowledge, and conduct optimizations. For instance, the CPU Agent utilizes the monitoring tool Prometheus to check CPU usage metrics within specific time periods, and determines the root causes of high CPU usage by matching against the extracted experience (Section 4). Note that if the CPU/Memory agents cannot report useful analysis, the Chief DBA is responsible for detecting other potential problems, such as those on the application side.

• Environment Settings. We need to set a series of principles for the agents to communicate efficiently, such as (1) Chat Order: to avoid mutual negative influence, we only allow one LLM agent to "speak" (i.e., appending its analysis results to the chat records to let the other agents know) at a time. To ensure flexible chat (e.g., if an agent cannot detect anything useful, it should not speak), we rely on the Chief DBA to decide which agent speaks in each iteration (diagnosis scheduling). (2) Visibility: by default, we assume the analysis results of agents can be seen by each other, i.e., within the same chat records. In the future, we can split agents into different groups, where each group is in charge of different database clusters/instances and the groups do not share chat records. (3) Selector, which is vital to filter invalid analysis that may mislead the diagnosis directions. (4) Updater, which works to update agent memory based on the historical records. A sketch of this communication loop is given after the chat summary example below.
• Chat Summary. For a complex database problem, it takes agents dozens of iterations to give an in-depth analysis, leading to extremely long chat records. Thus, it is vital to effectively summarize the critical information from the chat records without exceeding the maximal prompt length of the LLM. To this end, we progressively summarize the lines of a record in which tools are used, including the inputs of those tools and the results they return. Based on the current summary, it extracts the goals intended to be solved with each call to the tool, and forms a new summary, e.g.,

[Current summary]
- I know the start and end time of the anomaly.

[New Record]
Thought: Now that I have the start and end time of the anomaly, I need to diagnose the causes of the anomaly
Action: is_abnormal_metric
Action Input: {"start_time": 1684600070, "end_time": 1684600074, "metric_name": "cpu_usage"}
Observation: "The metric is abnormal"

[New summary]
- I know the start and end time of the anomaly.
- I searched for is_abnormal_metric, and I now know that the CPU usage is abnormal.
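To tie the environment settings and the chat summary together, below is a minimal sketch of one possible communication loop; llm, the agent objects, and their pick_speaker/analyze/summarize interfaces are all hypothetical stand-ins, and the real framework [1, 16] is more elaborate.

```python
# A sketch of the communicative loop: the Chief DBA schedules one speaker
# per iteration (chat order), analyses are appended to shared chat records
# (visibility), and a running summary is rewritten after each tool record,
# following the Summary_prompt in Appendix A. All helpers are assumed.

SUMMARY_PROMPT = ("Progressively summarize the lines of a record that uses "
                  "tools ...\nCurrent summary:\n{summary}\n"
                  "New lines:\n{new_lines}\nNew summary:")

def diagnose(chief, agents, anomaly, llm, max_rounds=10):
    chat_records, summary = [], ""
    for _ in range(max_rounds):
        speaker = chief.pick_speaker(agents, chat_records)  # diagnosis scheduling
        if speaker is None:                                 # [Silence]: stop
            break
        message = speaker.analyze(anomaly, summary)         # may invoke tool APIs
        chat_records.append(message)                        # visible to all agents
        summary = llm(SUMMARY_PROMPT.format(summary=summary, new_lines=message))
    return chief.summarize(chat_records)                    # root causes + solutions
```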
With this communicative framework and well-defined communication principles, the collaborative diagnosis process among human and LLM agents becomes more efficient (e.g., parallel diagnosis) and more effective (e.g., chat records can trigger in-depth metric observation and root cause analysis).

# 9 PRELIMINARY EXPERIMENT RESULTS

Demonstration. As illustrated in Figure 5, the Chief DBA monitors the status of the database to detect anomalies. Upon recognizing a new anomaly, the Chief DBA notifies both the Memory Agent and the CPU Agent. These agents independently assess the potential root causes and communicate their findings (the root causes and recommended solutions) to the Chief DBA. Subsequently, the Chief DBA consolidates the diagnostic results for the user's convenience.
In the initial iterations, these agents generally gather limited information, so they continue for multiple iterations until the conclusion of the Chief DBA is nearly certain or no further valuable information can be obtained. Additionally, during the diagnosis, users have the option to participate by offering instructions and feedback, such as verifying the effectiveness of a proposed optimization solution.

Figure 5: A basic demonstration of D-Bot. (In the demo, the Chief DBA reports that the identified root causes are high CPU usage due to a high number of running processes, high load average, blocked processes, and low entropy, as well as high memory usage caused by poor join performance and significant amounts of inactive anonymous memory; the CPU and Memory Agents suggest optimizations such as checking the GUC parameter 'enable_hashjoin', optimizing the SQL structure to reduce JOIN cost, using temporary tables to filter data, and checking processes that may not release memory properly.)

Diagnosis Performance Comparison. We compare the performance of D-Bot against a baseline, namely LLM+Metrics. Both methods are deployed with the OpenAI model GPT-4 [2], alongside metrics and views from PostgreSQL and Prometheus. The evaluation focuses on basic single-cause problems, as detailed in Table 1; we also offer a multi-cause diagnosis example in Appendix B.

Table 1: Diagnosis performance on single root causes (legal vs. accurate diagnosis results for LLM+Metrics and D-Bot).

Type | Root Cause | Description
Data Insert | INSERT_LARGE_DATA | Long execution time for large data insertions
Slow Query | FETCH_LARGE_DATA | Fetching of large data volumes
Slow Query | REDUNDANT_INDEX | Unnecessary and redundant indexes in tables
Slow Query | LACK_STATISTIC_INFO | Outdated statistical info affecting the execution plan
Slow Query | MISSING_INDEXES | Missing indexes causing performance issues
Slow Query | POOR_JOIN_PERFORMANCE | Poor performance of Join operators
Slow Query | CORRELATED_SUBQUERY | Non-promotable subqueries in SQL
Concurrent Transaction | LOCK_CONTENTION | Lock contention issues
Concurrent Transaction | WORKLOAD_CONTENTION | Workload concentration affecting SQL execution
Concurrent Transaction | CPU_CONTENTION | Severe external CPU resource contention
Concurrent Transaction | IO_CONTENTION | IO resource contention affecting SQL performance

Preliminary results indicate that both LLM+Metrics and D-Bot achieve a high legality rate (producing valid responses to specific database issues).
However, this is "dangerous behavior" for LLM+Metrics, which actually has a very low success rate (i.e., infrequent provision of the correct causes). In contrast, D-Bot achieves both a high legality rate and a high success rate. The reasons are three-fold. First, LLM+Metrics conducts very basic reasoning and often misses key causes. For example, in the INSERT_LARGE_DATA case, LLM+Metrics only finds a "high number of running processes" with the node_procs_running metric and stops early. In contrast,
Second, LLM +Metrics often â makes upâ reasons without substantial knowledge evidence. For example, for the CORRE- LATED_SUBQUERY case, LLM +Metrics observes SORT operations in logged queries, and incorrectly attributes the cause to â frequent reading and sorting of large amount of dataâ , thereby ending the diagnostic process. Instead, D-Bot cross-references with the query optimization knowledge, and then finds the correlated-subquery structure might be the performance bottleneck, with additional extracted information like estimated operation costs. Third, LLM +Metrics meet trouble in deriving appropriate solu- tions. LLM +Metrics often gives very generic optimization solutions (e.g., â resolve resource contention issuesâ ), which are useless in practice. Instead, leveraging its tool retrieval component, D-Bot can learn to give specific optimization advice (e.g., invoking query transformation rules, adjusting the work_mem parameter) or gather more insightful information (e.g., â calculate the total cost of the plan and check whether the cost rate of the sort or hash operators exceeds the cost rate thresholdâ ). This evaluation reveals the potential of D-Bot in going beyond mere anomaly detection to root cause analysis and provision of actionable suggestions. Despite these advancements, from the basic deployment of D-Bot, there are still some unresolved challenges. First, it is tricky to share the maintenance experience (e.g., varying metric and view names) across different database products. Second, it is labor-intensive to adequately prepare extensive number of anomaly-diagnosis data, which is essential to fine-tune and direct less-capable LLMs (e.g., those smaller than 10B) to understand the complex database knowledge and apply in maintenance. # 10 CONCLUSION In this paper, we propose a vision of D-Bot, an LLM-based data- base administrator that can continuously acquire database main- tenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. We will continue to complete and improve this work with our collaborators. REFERENCES [1] [n.d.]. https://github.com/OpenBMB/AgentVerse. Last accessed on 2023-8. [2] [n.d.]. https://openai.com/. Last accessed on 2023-8. [3] Surajit Chaudhuri and Vivek R.
2308.05481#29
2308.05481#31
2308.05481
[ "2307.07924" ]
2308.05481#31
LLM As DBA
[3] Surajit Chaudhuri and Vivek R. Narasayya. 1997. An Efficient Cost-Driven Index Selection Tool for Microsoft SQL Server. In VLDB. 146-155.
[4] Karl Dias, Mark Ramacher, Uri Shaft, Venkateshwaran Venkataramani, and Graham Wood. 2005. Automatic Performance Diagnosis and Tuning in Oracle. In CIDR 2005. 84-94.
[5] Shiyue Huang, Ziwei Wang, Xinyi Zhang, Yaofeng Tu, Zhongliang Li, and Bin Cui. 2023. DBPA: A Benchmark for Transactional Database Performance Anomalies. Proc. ACM Manag. Data 1, 1 (2023), 72:1-72:26.
[6] Prajakta Kalmegh, Shivnath Babu, and Sudeepa Roy. 2019. iQCAR: inter-Query Contention Analyzer for Data Analytics Frameworks. In SIGMOD 2019. 918-935.
[7] Jan Kossmann, Alexander Kastius, and Rainer Schlosser. 2022. SWIRL: Selection of Workload-aware Indexes using Reinforcement Learning. In EDBT. 2:155-2:168.
[8] Hai Lan, Zhifeng Bao, and Yuwei Peng. 2020. An Index Advisor Using Deep Reinforcement Learning. In CIKM. 2105-2108.
[9] Gabriel Paludo Licks, Júlia Mara Colleoni Couto, Priscilla de Fátima Miehe, Renata De Paris, Duncan Dubugras A. Ruiz, and Felipe Meneguzzi. 2020. SmartIX: A database indexing agent based on reinforcement learning. Appl. Intell. 50, 8 (2020), 2575-2588.
[10] Ping Liu, Shenglin Zhang, Yongqian Sun, Yuan Meng, Jiahai Yang, and Dan Pei. 2020. FluxInfer: Automatic Diagnosis of Performance Anomaly for Online Database System. In IPCCC 2020. 1-8.
[11] Xiaoze Liu, Zheng Yin, Chao Zhao, Congcong Ge, Lu Chen, Yunjun Gao, Dimeng Li, Ziting Wang, Gaozhong Liang, Jian Tan, and Feifei Li. 2022. PinSQL: Pinpoint Root Cause SQLs to Resolve Performance Issues in Cloud Databases. In ICDE 2022. 2549-2561.
[12] Xianglin Lu, Zhe Xie, Zeyan Li, Mingjie Li, Xiaohui Nie, Nengwen Zhao, Qingyang Yu, Shenglin Zhang, Kaixin Sui, Lin Zhu, and Dan Pei. 2022. Generic and Robust Performance Diagnosis via Causal Inference for OLTP Database Systems. In CCGrid 2022. 655-664.
[13] Minghua Ma, Zheng Yin, Shenglin Zhang, et al. 2020. Diagnosing Root Causes of Intermittent Slow Queries in Large-Scale Cloud Databases. Proc. VLDB Endow. 13, 8 (2020), 1176-1189.
[14] Yuxi Ma, Chi Zhang, and Song-Chun Zhu. 2023. Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models. CoRR abs/2307.03762 (2023).
[15] R. Malinga Perera, Bastian Oetomo, Benjamin I. P. Rubinstein, and Renata Borovica-Gajic. 2021. DBA bandits: Self-driving index tuning under ad-hoc, analytical workloads with safety guarantees. In ICDE. 600-611.
[16] Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, et al. 2023. Communicative Agents for Software Development. arXiv preprint arXiv:2307.07924 (2023).
[17] Yujia Qin, Shengding Hu, Yankai Lin, et al. 2023. Tool learning with foundation models. arXiv preprint arXiv:2304.08354 (2023).
[18] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2023. ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. arXiv:2307.16789.
[19] Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval 3, 4 (2009), 333-389.
[20] James Turnbull. 2018. Monitoring with Prometheus. Turnbull Press.
[21] Gary Valentin, Michael Zuliani, Daniel C. Zilio, Guy M. Lohman, and Alan Skelley. 2000. DB2 Advisor: An Optimizer Smart Enough to Recommend Its Own Indexes. In ICDE. 101-110.
[22] Kyu-Young Whang. 1987. Index Selection in Relational Databases. Foundations of Data Organization (1987), 487-500.
[23] Wentao Wu, Chi Wang, Tarique Siddiqui, Junxiong Wang, Vivek R. Narasayya, Surajit Chaudhuri, and Philip A. Bernstein. 2022. Budget-aware Index Tuning with Reinforcement Learning. In SIGMOD Conference. 1528-1541.
[24] Dong Young Yoon, Ning Niu, and Barzan Mozafari. 2016. DBSherlock: A Performance Diagnostic Tool for Transactional Databases. In SIGMOD 2016. 1599-1614.
[25] Xuanhe Zhou, Chengliang Chai, Guoliang Li, and Ji Sun. 2020. Database meets artificial intelligence: A survey. IEEE Transactions on Knowledge and Data Engineering 34, 3 (2020), 1096-1116.
[26] Xuanhe Zhou, Luyang Liu, Wenbo Li, Lianyuan Jin, Shifu Li, Tianqing Wang, and Jianhua Feng. 2022. AutoIndex: An Incremental Index Management System for Dynamic Workloads. In ICDE. 2196-2208.
[27] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. Large Language Models Are Human-Level Prompt Engineers. arXiv:2211.01910.

# A APPENDIX - PROMPTS

Chief_dba_format_prompt

You are in a company whose databases meet anomalies and it depends on you to collaborate with other agents to diagnose the root causes. ${role_description}

Rules and Format Instructions for Response:

- You must listen and respond to the user's advice in the following format:

Thought: I now know the advice of the user, and I need to consider it during diagnosis
Action: Speak
Action Input: ({"diagnose": response to the advice, "solution": [], "knowledge": ""})

- You can detect and diagnose an anomaly as follows, using tools:

Thought: (your thought)
Action: (an action name; it can be one of [obtain_anomaly_time, Speak])
Action Input: (argument for the action)

First, you need to call the tool API to get the start and end time of an anomaly:

Thought: I need to obtain the start and end time of the anomaly by calling the tool API
Action: obtain_anomaly_time
Action Input: {"input": "json dict string"}
Observation: {"start_time": "xxxx", "end_time": "xxxx"}

After obtaining the start and end time of the anomaly, announce it with the following format:

Thought: I now know the start and end time of the anomaly, and I need to report it to the agents
Action: Speak
Action Input: ({"diagnose": the start and end time of the anomaly you found, "solution": [], "knowledge": ""})

After all the agents have announced the root causes they found, you should summarize all the mentioned root causes and optimization solutions point by point:

Thought: I now know the root causes and optimization solutions from other agents, and I need to conclude them point by point
Action: Speak
Action Input: ({"diagnose": The identified root causes of the anomaly are ..., "solution": The suggested optimization solutions are ..., "knowledge": ""})

Here is the conversation history:
${chat_history}
Here is the execution log of tools:
${tool_observation}

- Once an agent has announced the root causes it found, it is your responsibility to memorize those root causes. After that, continue to encourage the other agents to diagnose.
- When no one speaks in the last round ([Silence] appears at the end of the history), you should summarize the root causes and optimization solutions point by point.

Pay attention to the response format instructions, and strictly follow the above rules! Based on the above history, what will you, ${agent_name}, do next?

CPU_agent_format_prompt

You are in a company whose databases meet anomalies. Follow the Chief DBA's instructions to diagnose the root causes. ${role_description}

Rules and Format Instructions for Response:
- During diagnosis, you have access to the following tools: ${tools}
- You can respond as follows to use a tool:

Thought: (your thought)
Action: (an action name; it can be one of [whether_is_abnormal_metric, cpu_diagnosis_agent, Speak]; pay attention to the capitalization)
Action Input: (argument for the action)

You can first determine abnormal metrics by using the tools, with the following format:

Thought: Now that I have obtained the start and end time of the anomaly, check whether the CPU usage is abnormal during that time period.
Action: whether_is_abnormal_metric
Action Input: {"start_time": 1684646375, "end_time": 1684646378, "metric_name": "cpu_usage"}

Next you must diagnose the root causes by using the tools, and must use the following format (any other choice is not allowed):

Thought: The CPU usage is abnormal, so I need to diagnose the cause of the anomaly using cpu_diagnosis_agent.
Action: cpu_diagnosis_agent
Action Input: {"start_time": 1684646375, "end_time": 1684646378}

After you have got the observation from cpu_diagnosis_agent, announce it to the Chief DBA, using the following format:
Thought: I now know the root cause of the anomaly, and I need to report it to the Chief DBA
Action: Speak
Action Input: ({"diagnose": the root causes you found, "solution": the optimization solutions for the root causes, split by ' ', "knowledge": the diagnosis knowledge you used})

(Note: the Memory Agent uses a similar prompt, which we omit to avoid redundancy.)

Summary_prompt

Progressively summarize the lines of a record in which you use tools, containing the inputs for certain tools and the results returned by these tools. Based on the current summary, summarize from the record the goals that you intended to solve with each tool call, add them onto the previous summary, and return a new summary.

EXAMPLE
Current summary:

New lines:
Thought: Now that I have the start and end time of the anomaly, I need to diagnose the causes of the anomaly
Action: whether_is_abnormal_metric
Action Input: {"start_time": 1684600070, "end_time": 1684600074, "metric_name": "cpu_usage"}
Observation: "The metric is abnormal"

New summary:
- I now know the start and end time of the anomaly.
- I searched for whether_is_abnormal_metric, and I now know that the CPU usage is abnormal.
END OF EXAMPLE

Now, try to summarize the following record.
Current summary:
${summary}
New lines:
${new_lines}
New summary:
# B APPENDIX - TEST CASES

Test Case (WORKLOAD_CONTENTION, POOR_JOIN_PERFORMANCE)

(First Round)

[Chief DBA] The anomaly started at 2023-08-05 23:38:30 and ended at 2023-08-05 23:43:30. [API Call] obtain_anomaly_time.

[CPU Agent] The CPU usage is abnormal, so I need to diagnose the cause of the anomaly. [API Call] whether_is_abnormal_metric.

[CPU Agent] The CPU relevant metric values are as follows: 1. node_scrape_collector_duration_seconds: this metric measures the duration of data collection from the node exporter; the value of 5.7218e-05 indicates that the data collection process is relatively fast. 2. node_procs_running: this metric represents the number of processes currently executing on the system; a value of 18.0 suggests that there are 18 processes running at the moment. ... 5. node_load1, node_load5, node_load15: these metrics represent the average system load over the last 1, 5, and 15 minutes, respectively; the values of 7.33, 2.4, and 0.89 suggest that the system is heavily loaded. [API Call] obtain_cpu_metrics. [Matched Knowledge] This diagnosis was based on several CPU relevant metrics obtained from Prometheus, such as node_scrape_collector_duration_seconds, node_procs_running, node_procs_blocked, node_entropy_available_bits, node_load1, node_load5, and node_load15.

[CPU Agent] The high CPU usage seems to be caused by the high value of node_load1. It indicates that the CPU has been heavily utilized recently. [API Call] knowledge_matching. [Matched Knowledge] {"cause_name": "workload_contention", "desc": "The function checks for several potential causes of contention, including abnormal CPU and memory resource usage, insufficient space in the database data directory, and excessive connections or thread pool usage. ... If any of these issues are detected, the function provides a detailed report of the problem and suggests potential solutions. If no issues are found, the function returns 'not a root cause'.", "metrics": "... - max_connections - db_cpu_usage - db_mem_usage - disk_usage - connection - thread_pool_rate"}.
Monitor the number of running processes and try to reduce unnecessary ones. 2. Look for resource contention and try to optimize resource allocation. [Memory Agent] The memory metric is abnormal. [API Call] whether_is_abnormal_metric. [Memory Agent] We first obtain memory metric values: - The node_memory_Inactive_anon_bytes metric value suggests a significant amount of inactive anonymous memory. - The node_memory_MemFree_bytes metric value indicates a considerable amount of free memory available. - The node_memory_Dirty_bytes metric value indicates a moderate amount of dirty memory. These memory metric values suggest that there may be a memory leak or inefficient memory management within the system. It might be worth investigating the memory allocation and deallocation processes, ensuring that resources are effectively managed. [API Call] obtain_memory_metrics. [Memory Agent] We next obtain slow queries from the pg_stat_statements view. There are potential causes that can be analyzed: 1. Poor join performance: One of the slow queries involves multiple table joins (VSELECT MIN(mi.info) ... FROM cast_info AS ci, info_type AS it1, info_type AS it2, keyword AS k, movie_info AS mi, movie_info_idx AS mi_idx, movie_keyword AS mk, name AS n, title AS t ...\â ). Poor join performance can result in high memory usage if the join operation involves a large amount of data or an inefficient join operator is used. It is recommended to optimize the SQL structure and check if the join operations can be optimized by using appropriate indexes or join operators. [API Call] obtain_slow_queries; knowledge_matching. [Matched Knowledge] {"cause_name": "poor_join_performanceâ , "desc": "There are four main situations: 1) when the GUC parameter â enable_hashjoinâ is set to â offâ , which can result in the optimizer choosing NestLoop or other join operators even when HashJoin would be more suitable; 2) when the optimizer incorrectly chooses the NestLoop operator, even when â set_hashjoinâ is on; 3) when the join operation involves a large amount of data, which can lead to high execution costs; and 4) when the cost of the join operator is expensive. ....â , "metrics": "- total_cost - cost_rate_threshold - nestloop_rows_threshold - large_join_threshola"}; {"cause_name":
[Solution] Optimize the SQL structure of the query ".." and check whether the join operations can be optimized by using appropriate indexes or join operators.

[Memory Agent] Other potential causes: without additional information about the system and its configuration, it is difficult to pinpoint a specific root cause of the high memory usage. Other potential causes may include inefficient query plans, lack of memory optimization settings, or memory leaks in the application or database engine. To further investigate and identify the root cause, it is advisable to perform the following steps: 1. Analyze the resource usage pattern over time to identify any patterns or trends related to the memory usage. 2. Check the configuration settings and optimization parameters of the database system to ensure they are properly set. 3. Monitor the system for any memory leaks or inefficient query plans that may be causing the high memory usage. 4. Review the application code and queries to identify any potential optimizations, or rewrite queries to improve their efficiency. [API Call] N/A.

[Solution] 1. Analyze the resource usage pattern over time to identify any patterns or trends related to the memory usage. 2. Check the configuration settings and optimization parameters of the database system to ensure they are properly set. 3. Monitor the system for any memory leaks or inefficient query plans that may be causing the high memory usage. 4. Review the application code and queries to identify any potential optimizations, or rewrite queries to improve their efficiency.

[Chief DBA] The identified root causes of the anomaly are high CPU usage due to a high number of running processes, processes waiting for resources, and high load averages. It is also uncertain whether poor join performance is causing the high memory usage. [API Call] N/A.

[Solution] 1. In order to reduce CPU utilization, it is necessary to investigate the specific processes involved and their resource requirements. 2. For the high memory usage, we need to gather more information such as the overall memory usage trend over time, other slow queries or resource-intensive operations, and any relevant configuration settings. Examining the memory usage patterns and query execution plans might provide further insights.
(Second Round) - further insights into memory problems

[Memory Agent] Chief DBA has suggested gathering more information such as the overall memory usage trend over time, other slow queries or resource-intensive operations, and any relevant configuration settings. This advice is indeed valuable, and I should consider it during the diagnosis. [API Call] N/A.

[Memory Agent] Based on the memory metric values and slow queries provided, it is difficult to directly determine the root cause of the high memory usage. However, we can analyze potential causes based on the information given. There is a significant amount of memory being used, particularly by node_memory_Dirty_bytes. Additionally, there are 11 active PostgreSQL activities. This indicates that there may be a memory leak or excessive memory usage in one or more queries or processes. The slow queries provided involve multiple table joins and aggregation functions ("MIN"). They may be resource-intensive and contribute to the high memory usage. [API Call] knowledge_matching.

[Matched Knowledge] {"cause_name": "poor_join_performance", ...}

[Solution] 1. Analyze the query plans, check for inefficient join operations, and monitor the memory usage over time. 2. Review the system configuration and resource constraints, which may provide insights into potential optimizations or improvements. 3. Consult a database administrator or expert for a more accurate analysis and resolution.

[Chief DBA] Based on our analysis, the root causes of the anomaly seem to be high CPU usage due to excessive running processes, poor join performance, and an inefficient query structure. The high CPU usage is mainly due to a high number of running processes and/or a high CPU load. There are 11 active PostgreSQL activities, indicating a memory leak or excessive memory usage in one or more queries or processes. The slow queries provided involve multiple table joins and aggregation functions, which may be resource-intensive and contribute to the high memory usage. Poor join performance and an inefficient query structure can also lead to unnecessary memory usage. [API Call] N/A.

[Solution] 1. Further investigate and analyze the running processes and system workload that cause severe contention. 2. Review the structure of the slow queries and optimize them to reduce memory consumption. 3. Investigate join operations for possible inefficiencies and optimize as needed. 4. Gather more information about overall system performance, such as overall memory utilization and any other abnormal metrics.
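The knowledge_matching step used throughout this test case can be approximated by an embedding-similarity lookup over the root-cause knowledge entries. Below is a minimal sketch, assuming a placeholder embedding function and an illustrative two-entry knowledge base; none of these names reflect D-Bot's actual implementation.

```python
import numpy as np

# Illustrative root-cause knowledge entries (abbreviated).
KNOWLEDGE = [
    {"cause_name": "workload_contention",
     "desc": "Abnormal CPU and memory resource usage, insufficient disk "
             "space, or excessive connections cause workload contention."},
    {"cause_name": "poor_join_performance",
     "desc": "Disabled hash joins, mischosen NestLoop operators, or large "
             "joins lead to expensive join execution and high memory use."},
]

def embed(text: str) -> np.ndarray:
    # Placeholder embedding; a real system would use a text-embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def knowledge_matching(finding: str, top_k: int = 1):
    """Return the knowledge entries most similar to the agent's finding."""
    q = embed(finding)
    scored = [(float(q @ embed(k["desc"])), k) for k in KNOWLEDGE]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [k for _, k in scored[:top_k]]

print(knowledge_matching("slow multi-table join causing high memory usage"))
```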
# Gentopia.AI: A Collaborative Platform for Tool-Augmented LLMs

Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu

https://github.com/Gentopia-AI

# Abstract

Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. We present Gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish GentPool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in GentPool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, GentBench, an integral component of GentPool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release Gentopia on GitHub (https://github.com/Gentopia-AI/Gentopia; all mentioned works are under the MIT license; see our demo at https://www.youtube.com/watch?v=7dZ3ZvsI7sw and our homepage at https://gentopia-ai.github.io/Gentopia-AI-Homepage/) and will continuously move forward.
# 1 Introduction

There is a burgeoning trend in research towards augmenting large language models (LLMs) with external tools, enabling them to access up-to-date databases (Jiang et al., 2023; Pan et al., 2023), perform arithmetic operations (Imani et al., 2023), navigate websites (Gur et al., 2023), develop software (Wu, 2023), etc. This integration of tools marks a departure from traditional language modeling, heralding a new era of intelligent agents capable of interacting with the real world.
Several projects and frameworks have been proposed to build tool-Augmented Language Models (ALMs), or "agents", including AutoGPT (Richards, 2023), SuperAGI (Kondi, 2023), HuggingGPT (Shen et al., 2023), GPT-Engineer (Osika, 2023), LangChain (Chase, 2023), Semantic Kernel (Callegari, 2023), and MiniChain (Rush, 2023). Each of these methods is deficient, to varying degrees, in the following critical features.

• Adaptive Customization: Many are designed for a single set of tasks without extensive support for customization, or they involve redundant and boilerplate implementation that unnecessarily complicates agent tuning.

• Tool-augmented NLP Benchmark: A user-customized agent, before registration, is expected to go through a thorough evaluation to ensure its quality. However, there is a lack of comprehensive benchmarks designed for agent evaluation in the aspects of efficiency, safety, robustness, etc.
• Democratization: A platform where user-customized agents can be registered and shared is missing. This hinders the interaction and collaboration of various user-customized agents. Collaborative growth is a critical point toward safe and powerful intelligence.

This paper proposes Gentopia, a lightweight and extensible framework for research on ALMs. Gentopia allows practitioners to customize an agent with a single configuration file, greatly simplifying the process of building, tuning, sharing, and evaluating agents. Various language models, task formats, prompting modules, and plugins are integrated into a unified paradigm, without loss of flexibility for agent customization. In addition, we believe the collaboration between user-customized agents can contribute to the democratization of AI. Hence, GentPool, a platform for agent registration and sharing, is established.

Figure 1: An overview of Gentopia.AI, encapsulating the following pivotal components: 1) Gentopia: a framework principally designed to assemble an agent instance from a YAML configuration file, composed of multiple pre-built agent components such as the LLM, tools, memory, and external agents; 2) GentPool: a platform engineered to facilitate the registration and sharing of specialized agents, seamlessly integrating GentBench, an ALM benchmark devised specifically for the comprehensive performance evaluation of agents.
2308.04030#2
2308.04030#4
2308.04030
[ "2302.13971" ]
2308.04030#4
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Agents registered in GentPool can be hierarchi- cally assembled together, enabling the collabora- tion of multiple agents. GentPool is accompa- nied by a unique benchmark, GentBench, that can probe customized agents with a holistic evaluation in terms of safety, robustness, efficiency, multilin- gual capabilities, etc. Notably, it is flexible for users to customize the evaluation by configuration. However, given the unique requirements and customization that each specific domain demands, tools and prompting paradigms developed for a par- ticular task may prove irrelevant or ineffective for others. This poses a significant challenge to the development of a single, all-encompassing agent that performs efficiently across all tasks. Conse- quently, there is a rising need for the collabora- tion of multiple specialized agents. For example, MetaGPT (Wu, 2023) models the entire process of software development with carefully orchestrated standard operating procedures (SOPs) to generate longer program codes for game development. In our work, Gentopia provides smooth support for the composition of agents, which is handy for agent collaboration in different scenarios.
2308.04030#3
2308.04030#5
2308.04030
[ "2302.13971" ]
2308.04030#5
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
# 2 Background A variety of agent projects have been proposed, targeting an array of diverse tasks, including auto- mated web navigation (Gur et al., 2023), database management (Jiang et al., 2023), automated game playing (Wang et al., 2023), collaborative soft- ware development (Wu, 2023), etc. Meanwhile, re- searchers are enthusiastically developing generalist agents that can perform well for multiple tasks. Au- toGPT (Richards, 2023) stands for the first experi- mental open-source application for fully automatic AI, with the ultimate goal of â autonomously achiev- ing whatever goal users set". SuperAGI (Kondi, 2023) provides a more user-friendly interface, im- proved memory management, optimized token us- age, and looping detection heuristics. Hugging- GPT (Shen et al., 2023) expands the potential of artificial intelligence by linking to extensive AI models hosted on HuggingFace, thereby support- ing a range of AI tasks in diverse domains and modalities, including language, vision, and speech. # 3 Design and Implementation Gentopia aims to provide easy assembly, sharing, and interaction of task-specialized agents. A sin- gle step to improve agent capability and efficiency gives plural contributions to interacted agents, fos- tering collective growth toward greater intelligence. # 3.1 Rationale The impetus of Gentopia is rooted in the aspi- ration to construct capable and deployable AI as- sistants. A pertinent question that arises in this context is whether there is a necessity for a massive and expensive model like 175B GPT-4 to perform relatively simple tasks such as summarizing a web search. Recent studies like TinyStories (Eldan and Types | Logical Diagram of Agent Type Vanitla | & â ~ fy â > @& ReAct | 2â BF â â » â â » â 2 Rewoo | 2â BB â â » â g OpenAl | A â S â â G â _â > â _> S â g Opendl A2â §& â â §& â â © â & Memory Oo OE n LJ LLM | Components: © Users Plugins G Function Call API Vector Database | Figure 2:
2308.04030#4
2308.04030#6
2308.04030
[ "2302.13971" ]
2308.04030#6
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
Gentopia agent types, categorized according to the interaction paradigms between agents and plugins. Li, 2023), Specializing Reasoning (Fu et al., 2023), Letâ s Verify Step by Step (Lightman et al., 2023), and ReWOO (Xu et al., 2023), direct our atten- tion towards an intuitive yet undervalued observa- tion â LLMs exhibit enhanced capabilities when a context/distribution shift is created, specifically narrowed towards certain target tasks. pletion. Besides, we build a clean and intuitive Command Line Interface (CLI) allowing users to â chatâ with the agent in an interactive way. Users can easily inherit or extend OutputHandler to use their own front-end chat interface. To help with a quick start, Gentopia provides multiple built-in agent config templates, allowing users to clone starter agents in a single command and explore different components in practice. However, there is no silver bullet for agent spe- cialization. Various strategies can be employed depending on target tasks. For instance, prompting "Letâ s think step by step" in context leads to more accurate math reasoning (Kojima et al., 2022). Pro- viding few-shot examples could guide an ideal exe- cution workflow. Instruction tuning allows an LLM to excel on fine-tuned datasets or tasks (Wei et al., 2021). Tweaking the agent type from ReAct (Yao et al., 2022) to ReWOO significantly reduces the execution time of observation-agnostic tasks like search & summarize. # 3.3 Adaptive Customization The agent configuration file encapsulates the criti- cal components of an agent, including:
2308.04030#5
2308.04030#7
2308.04030
[ "2302.13971" ]
2308.04030#7
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
â ¢ Basic Attributes. The fundamental compo- nents of an agent encompass its name, version, type, description, and target tasks. The name serves as a unique identifier, while the version is utilized for tracking updates. The agentâ s type delineates its interaction paradigm with plugins. The description provides a succinct overview of the agentâ s usage, and the tar- get_tasks list the tasks or examples for which the agent specializes. These descriptions can be selectively used in-context for agents to recognize each other upon interaction. The design of Gentopia is deeply grounded in our belief to share specialized agents for collective growth. Gentopia presents an easy and portable way to build agents, facilitating the reproduction, enhancement, and interaction of agents. A compan- ion platform, GentPool, is used to register pub- lic agents, coupling each with a descriptive Wiki page to help users navigate and search for agents in need. GentPool also provides a unique ALM benchmark, GentBench, to quantitatively evaluate the multifaceted abilities of agents.
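The config-to-agent flow just described can be pictured with the following sketch. The YAML keys mirror the component list in Section 3.3, but the AgentAssembler stub shown here is an assumption for illustration, not Gentopia's verbatim API.

```python
import yaml

AGENT_YAML = """
name: my_agent
version: 0.1
type: react
description: Answers questions with web search.
llm:
  model_name: gpt-3.5-turbo
  params: {temperature: 0.0}
plugins: [google_search]
"""

class AgentAssembler:
    def get_agent(self, config: dict):
        # In the real framework this would wire up the LLM, prompt
        # template, memory, and plugins named in the config.
        return config  # placeholder "agent"

config = yaml.safe_load(AGENT_YAML)
agent = AgentAssembler().get_agent(config)
print(agent["name"], agent["type"])
```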
2308.04030#6
2308.04030#8
2308.04030
[ "2302.13971" ]
2308.04030#8
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
â ¢ LLM is a pivotal component that drives the agentâ s behavior. It is typically a dictionary of the model_name and parameters. Gentopia supports a variety of OpenAI LLMs and over 10 kinds of HuggingFace open-source LLMs (including Llama (Touvron et al., 2023), Al- paca (Taori et al., 2023), Vicuna (Chiang et al., 2023), Falcon(Almazrouei et al., 2023), Flan (Wei et al., 2021), MPT (MosaicML NLP Team, 2023), and more), each with a unique set of tunable parameters and usage costs. No- tably, Gentopia unifies support in both CPU # 3.2 Assembling Agents At its core, Gentopia embodies each customized agent as a single yaml config file, which can be sent to AgentAssembler to create a corresponding agent instance. An agent instance acts similarly to a language model, where users can use â
2308.04030#7
2308.04030#9
2308.04030
[ "2302.13971" ]
2308.04030#9
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
runâ or â streamâ to get completed or incremental com- Tasks Sub-tasks Description Data Source Examples Reasoning Math measures agent ability to solve a wide range of math problems. MATH (Hendrycks 2021b), et al., GSM8K (Cobbe et al., 2021) Coding measures agent ability to write code to fulfill requirements and pass tests. (Chen HumanEval 2021), et (Austin MBPP et 2021), APPS (Hendrycks et al., 2021a) al., al., Planning measures agent reasoning to complete a task in correct order. LLM- Plan et al., 2023) (Valmeekam Commonsense measures agent ability in reason- ing for everyday questions. BBH (Suzgun et al., 2022) Knowledge World knowledge measures agent ability in an- swering a wide range of factual questions. MMLU (Hendrycks et al., 2020) Domain-specific knowledge measures agent with domain- specific knowledge. AGIEval et al., 2023) (Zhong Web-retrieval measures how capable an agent could answer to surf online and retrieve real-time information. Curated Safety Integrity measures agent ability to avoid generating unsafe or offensive content, even when prompted in crafty ways (eg. jailbreaking). Curated Harmlessness measures agent bias in gender, ethics, age, etc. BBQ et Bold et al., 2021) (Parrish 2021), (Dhamala al., Multilingual Translation asks agent to correctly translate among different languages. Curated Understanding similarly tests an agent if it un- derstands something in different languages. Curated Token usage
2308.04030#8
2308.04030#10
2308.04030
[ "2302.13971" ]
2308.04030#10
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
# Efficiency # Run time These metrics indicate how expensive or time-consuming for agents to execute on average and on different tasks. Table 1: An overview of GentBenchâ s task classification, task descriptions, data sources, and example instances. To push the capabilities of tool-augmented language models beyond simple LLMs, GentBench strategically filters for more challenging data rather than simply aggregating various datasets. and GPU loading, together with 8-bit and 4- bit weight Quantization, thereby adapting to a wide range of computation environments.
2308.04030#9
2308.04030#11
2308.04030
[ "2302.13971" ]
2308.04030#11
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
â ¢ Prompt Template is essentially an f-string template with variable placeholders and a val- idation check. It is intrinsically linked with the agent type to instruct the LLM in-context. Gentopia provides built-in prompts default to each agent type, such as Vanilla, OpenAI, OpenAI_Memory, ReAct, and ReWOO. Plugins enable agents to interact with exter- nal tools or other agents, thereby extending their capabilities beyond single language mod- els. Gentopia also allows agents to be built in a hierarchical architecture, such that those closer to the leaves are supposed to be increas- ingly specialized and narrowed to more gran- ular sub-tasks.
2308.04030#10
2308.04030#12
2308.04030
[ "2302.13971" ]
2308.04030#12
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
â ¢ Memory allows LLMs to retrieve informa- tion out-of-context. This is particularly useful when itâ s necessary to circumvent the con- text limitations of LLMs or to conserve token consumption. Implementation details are de- scribed in the appendix. # 3.4 Agent Evaluation Benchmark GentBench is a unique benchmark for agents or ALMs. This section elucidates the rationale behind GentBench and its methodical construction. # 3.4.1 Objectives Due to the massive need of training datasets, re- searchers and organizations tend to use public NLP benchmarks, such as MMLU (Hendrycks et al., 2020), MATH (Hendrycks et al., 2021b), Big- Bench (bench authors, 2023) to enrich the LLM training corpus. Such methods inevitably introduce evaluation bias when the entailed agents are tested against the same set of tasks at inference. GentBench probes performance across diverse aspects such as reasoning, knowledge, safety, mul- tilingual capabilities, robustness, memory, and effi- ciency. This comprehensive approach transcends the limitations of single datasets, facilitating a more holistic evaluation of an agentâ s capabilities. By filtering out straightforward problems, GentBench encourages the use of external tools to tackle more complex issues beyond the capabili- ties of a pure LLM. Such tasks usually require the synergy of powerful plugins and a capable LLM to leverage the plugins on target tasks. 3.4.2 Benchmark Construction The construction of GentBench involves an exten- sive collection and curation of tasks, and a meticu- lous process to filter out less challenging problems. The gpt-3.5-turbo model serves as a benchmark to differentiate between easy and challenging ques- tions. Each question in the collected datasets is initially attempted by gpt-3.5-turbo. Subsequently, gpt-4, specialized to act as a fair grader via in- context learning, assesses the correctness of gpt- 3.5-turboâ
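The construction procedure above amounts to a filtering loop. The following is a minimal sketch, assuming a generic ask_llm(model, prompt) client in place of the real gpt-3.5-turbo and gpt-4 calls; prompts and API plumbing are simplified.

```python
def filter_hard_questions(dataset, ask_llm):
    """Keep only questions that gpt-3.5-turbo fails, as judged by gpt-4."""
    hard = []
    for item in dataset:
        answer = ask_llm("gpt-3.5-turbo", item["question"])
        grading_prompt = (
            f"Question: {item['question']}\n"
            f"Reference answer: {item['reference']}\n"
            f"Candidate answer: {answer}\n"
            "Is the candidate correct? Reply yes or no."
        )
        verdict = ask_llm("gpt-4", grading_prompt)
        if verdict.strip().lower().startswith("no"):
            hard.append(item)  # keep only what gpt-3.5-turbo fails
    return hard

# Toy stand-in for the two models, just to exercise the loop.
def fake_llm(model, prompt):
    return "no" if model == "gpt-4" else "42"

print(len(filter_hard_questions(
    [{"question": "1+1?", "reference": "2"}], fake_llm)))
```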
2308.04030#11
2308.04030#13
2308.04030
[ "2302.13971" ]
2308.04030#13
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
s answer. This rigorous evaluation results in a refined dataset composed solely of the chal- lenging problems where gpt-3.5-turbo fails to solve independently. To prevent overfitting and enhance the modelâ s general applicability, GentBench partitions the benchmark into public and private components. The public component fosters model development with open access, while the private component is exclusively for agents to be merged into GentPool, testing the generalized abilities of the agent on unseen tasks. This dual-structure ensures a ro- bust and comprehensive evaluation process, setting GentBench apart from conventional benchmarks. 3.4.3 EvalPipeline GentBench employs a range of specialized agents, known as "graders", each designed to cater to dif- ferent evaluation needs, including binary outcomes (GatedGrader), continuous scoring (ScoreGrader), pairwise outcomes (DojoGrader), custom measure- ments (InstructedGrader), and unit test execution (CodeGrader). For usersâ convenience, we provide MultiProcessEvalPipeline class to automatically sample from each evaluation class, conduct evalua- tions in parallel by matched graders, and aggregate the results into a comprehensive report. We also integrate our evaluation results with Zeno (Cabrera et al., 2023), a powerful visualization tool assist- ing users in collecting nuanced insight into the strengths and weaknesses of agents. # 3.5 Collective Contribution As an open-source project, Gentopia actively encourages users to contribute their specialized agents to GentPool. Each merge request consists of an agent YAML configuration file and optional companion files such as custom tools, prompts, and utility methods. Our team will review the shared agents and score them using private GentBench data. Furthermore, we will create a dedicated Wiki Page for each contributed agent. Once the agents are incorporated into Gentopia, users can utilize built-in commands to clone or call it for downstream use cases, fostering a dynamic and collaborative environment. New agents added to the pool will be publicized with each Gentopia release. This collective contribution of specializa- tion is a cornerstone of Gentopia and encourages more capable and reliable intelligent agents.
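The grader dispatch and parallel execution can be sketched as follows; the two grader functions are simplified stand-ins for the grader classes named above, and the pipeline is reduced to a plain process pool.

```python
from concurrent.futures import ProcessPoolExecutor

def gated_grader(task):
    # Binary pass/fail outcome, in the spirit of GatedGrader.
    return {"task": task["id"], "passed": task["output"] == task["reference"]}

def score_grader(task):
    # Continuous score in [0, 1], in the spirit of ScoreGrader.
    return {"task": task["id"], "score": 1.0 if task["output"] else 0.0}

GRADERS = {"gated": gated_grader, "score": score_grader}

def grade(task):
    return GRADERS[task["grader"]](task)

def evaluate(tasks):
    # Parallel evaluation, aggregated into one report.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(grade, tasks))

if __name__ == "__main__":
    tasks = [
        {"id": 1, "grader": "gated", "output": "4", "reference": "4"},
        {"id": 2, "grader": "score", "output": "", "reference": "x"},
    ]
    print(evaluate(tasks))
```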
2308.04030#12
2308.04030#14
2308.04030
[ "2302.13971" ]
2308.04030#14
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
# 4 Case Study We briefly showcase the process of building an agent, who acts as an experienced and visionary entrepreneur, for the users to create business plans with the help of Gentopia. Further, the users can evaluate the created agent and share it publicly into the GentPool. # 4.1 Initializing an Agent Figure 3 illustrates a concrete workflow for work- ing with agents in GentPool. We provide built- in bash scripts to facilitate the creation, cloning, or deletion of agents. GentPool registers tem- plate agents for each built-in agent type, allowing Config and assemble the agent (B) ot . Human-Agent Interaction © business plans and strategies. Create new agent $ ./create_agent my_agent Clone from pooled agents $ ./clone_agent react_template my_agent = Hnelude .-/dr_science/agent. yam. Delete agent 1 lon is an experienced and visionary entrepreneur â ange a bunch of tools and agents to collaborate on his ideas. te: prompt gentpool.pool.elon.pronpt.PromptOfelon YAML -_ >. 9 python assemble.py my_agent « CLi-based : Interaction -â â â
2308.04030#13
2308.04030#15
2308.04030
[ "2302.13971" ]
2308.04030#15
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
6enttopia- Welcome to Gentopia! 6 [=== | id : : 3 \ GUbbased Interaction $ /delete_agent my_agent BSI S it push origin my_agent_branch Config components Custom Operators 44 Share the agent in GentPool © BRS prion evatsate py my_agent GentPool iy Moe ma J a . 2 a am ae Figure 3: A representative workflow using Gentopia.AI with GentPool. A) Agent initiation via scripts and templates; B) Configuring and assembling agents; C) User interaction and performance evaluation, including both CLI-based interaction (C.1) and GUI-based interaction (C.2); D) Sharing specialized agents in the GentPool. # 4.4 Agent Specialization and Publication users to clone, for instance, the "react_template" to start off. An agent instance simply contains an "agent.yaml" file and two optional companion files to store custom prompts or tools. Users can employ various methods in agent spe- cialization, improving agent performance and ef- ficiency. These approaches include in-context prompt tunings like using few-shot examples, fine- tuning a specialized LLM on desired tasks or datasets, optimizing component configs such as trying new agent types and other sub-agents, and improving the capabilities of tools. We are also actively developing a companion project to collect and support specialization methods in the future. # 4.2 Custom Configuration Users can configure essential components of the agent such as name, description, target_task, plu- gins, etc. For instance, shown in Figure 3, users can use the prompt template of â PromptOfElonâ and GPT-4 for constructing the LLM component. They can also add plugins (e.g., â google_searchâ and â web_pageâ ) to boost the agent. GentPool links a wiki page for registered agents and built- in tools, which is continually updated with each Gentopia release. Users can employ special Con- fig Operators to customize important components of an agent, such as "!prompt" for customizing prompt_template, "!tool" for self-defined tools as plugins, "!include" for sub-agents as plugins, "!file" to read local files in text format, and "!env" to read an environmental variable. Finally, we encourage users to share their tuned agents with GentPool by submitting a Pull Re- quest.
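Operators such as "!env" and "!file" can be realized with custom YAML constructors. Below is a minimal sketch using PyYAML's add_constructor mechanism; Gentopia's actual operator implementation may differ.

```python
import os
import yaml

def env_constructor(loader, node):
    # "!env VAR" resolves to the value of the environment variable VAR.
    return os.environ.get(loader.construct_scalar(node), "")

def file_constructor(loader, node):
    # "!file path" resolves to the text content of the local file.
    with open(loader.construct_scalar(node)) as f:
        return f.read()

yaml.SafeLoader.add_constructor("!env", env_constructor)
yaml.SafeLoader.add_constructor("!file", file_constructor)

config = yaml.safe_load("api_key: !env OPENAI_API_KEY")
print(config)
```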
2308.04030#14
2308.04030#16
2308.04030
[ "2302.13971" ]
2308.04030#16
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
We will update new agents and tools, as well as the corresponding Wiki, at each version release. # 5 Conclusion This paper introduces Gentopia.AI, an open- source platform designed for tool-augmented LLMs. Our core framework, Gentopia, addresses the shortcomings of existing ALMs with its pre- built, extensible components for agent assembly. Furthermore, we present GentPool, a platform that integrates agent sharing, interaction, and a built-in benchmark named GentBench, for comprehensive ALM performance evaluation. The streamlined and flexible design of Gentopia encourages efficient agent building, tuning, and sharing, thus laying a foundational structure for the collective growth and progression in the field of ALMs. # 4.3 Testing and Evaluation There are two methods to assess the performance of a new agent: qualitative human evaluation and quantitative GentBench evaluation. Users can call "assemble.py" to initiate a CLI chat interface and converse with the target agent. Alternatively, users can use "evaluate.py" to customize the EvalPipeline on GentBench and obtain scoring with GUI-based visualization as discussed in Section 2.4.3.
2308.04030#15
2308.04030#17
2308.04030
[ "2302.13971" ]
2308.04030#17
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
# Acknowledgement Gratefully, we thank Dr. Graham Neubig and the Zeno team for advising and integrating with us on agent evaluations and visualizations. Gentopia.AI is a new open-source community and expanding features in the long term. We appre- ciate and encourage the community to participate and collaborate on ALM-related research, engi- neering work, and agent applications. Please get in touch with us for future opportunities. # Ethics Statement In developing our framework Gentopia, we ad- hered to rigorous ethical principles to ensure the responsible use and deployment of ALMs. We tried to make it as transparent as possible so that users can more reliably use it. Also, the data sources used in GentBench are collected from publicly available datasets, and no demographic or confidential infor- mation from users is accessed, safeguarding their privacy and anonymity. Furthermore, the availability of multiple agents and datasets in Gentopia is intended to facilitate diverse and unbiased research while ensuring that developers can easily customize and share their agents responsibly. Researchers and developers can explore the full potential of ALMs while safe- guarding the interests of all stakeholders involved.
2308.04030#16
2308.04030#18
2308.04030
[ "2302.13971" ]
2308.04030#18
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
# References Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Al- shamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Hes- low, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: an open large language model with state-of-the-art performance. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732. BIG bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of lan- guage models. Transactions on Machine Learning Research.
2308.04030#17
2308.04030#19
2308.04030
[ "2302.13971" ]
2308.04030#19
Gentopia: A Collaborative Platform for Tool-Augmented LLMs
à ngel Alexander Cabrera, Erica Fu, Donald Bertucci, Kenneth Holstein, Ameet Talwalkar, Jason I. Hong, and Adam Perer. 2023. Zeno: An interactive frame- work for behavioral evaluation of machine learning. In CHI Conference on Human Factors in Computing Systems, CHI â 23, New York, NY, USA. Association for Computing Machinery. Shawn Callegari. 2023. Semantic Kernel: Integrate cutting-edge LLM technology quickly and easily into your apps.
Ángel Alexander Cabrera, Erica Fu, Donald Bertucci, Kenneth Holstein, Ameet Talwalkar, Jason I. Hong, and Adam Perer. 2023. Zeno: An interactive framework for behavioral evaluation of machine learning. In CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA. Association for Computing Machinery.

Shawn Callegari. 2023. Semantic Kernel: Integrate cutting-edge LLM technology quickly and easily into your apps.
Harrison Chase. 2023. LangChain: Next Generation Language Processing.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, and Rahul Gupta. 2021. Bold: Dataset and metrics for measuring biases in open-ended language generation. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 862-872.
Ronen Eldan and Yuanzhi Li. 2023. TinyStories: How small can language models be and still speak coherent English? arXiv preprint arXiv:2305.07759.

Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. 2023. Specializing smaller language models towards multi-step reasoning. arXiv preprint arXiv:2301.12726.

Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. 2023. A real-world webagent with planning, long context understanding, and program synthesis. arXiv preprint arXiv:2307.12856.

Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. 2021a. Measuring coding challenge competence with APPS. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. In International Conference on Learning Representations.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the MATH dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).

Shima Imani, Liang Du, and Harsh Shrivastava. 2023. MathPrompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398.

Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. 2023. StructGPT: A general framework for large language model to reason over structured data. arXiv preprint arXiv:2305.09645.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199-22213.

Abhay Kondi. 2023. SuperAGI: Open-source framework to build, manage and run useful Autonomous AI Agents.

Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. arXiv preprint arXiv:2305.20050.
MosaicML NLP Team. 2023. Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs.

Anton Osika. 2023. GPT-Engineer: Specify what you want it to build, the AI asks for clarification, and then builds it.

Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. 2023. Unifying large language models and knowledge graphs: A roadmap. arXiv preprint arXiv:2306.08302.
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R Bowman. 2021. BBQ: A hand-built bias benchmark for question answering. arXiv preprint arXiv:2110.08193.

Toran Bruce Richards. 2023. Auto-GPT: An Autonomous GPT-4 Experiment.

Sasha Rush. 2023. MiniChain: A tiny library for coding with large language models.

Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in HuggingFace. arXiv preprint arXiv:2303.17580.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, and Jason Wei. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models, 3(6):7.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Karthik Valmeekam, Sarath Sreedharan, Matthew Marquez, Alberto Olmo, and Subbarao Kambhampati. 2023. On the planning abilities of large language models (a critical investigation with a proposed benchmark). arXiv preprint arXiv:2302.06706.

Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291.

Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.

Alexander Wu. 2023. MetaGPT: The Multi-Role Meta Programming Framework.

Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. 2023. ReWOO: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.

Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364.
# AgentSims: An Open-Source Sandbox for Large Language Model Evaluation

Jiaju Lin (PTA Studio, Pennsylvania State University), Haoran Zhao (PTA Studio, Beihang University; corresponding author), Aochi Zhang (PTA Studio), Yiting Wu (PTA Studio, Sun Yat-sen University), Huqiuyue Ping (PTA Studio, Zhejiang University), Qin Chen (East China Normal University)

[email protected], [email protected], and [email protected]

# Abstract

With ChatGPT-like large language models (LLMs) prevailing in the community, how to evaluate the ability of LLMs is an open question. Existing evaluation methods suffer from the following shortcomings: (1) constrained evaluation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task-based evaluation, where LLM agents complete tasks in a simulated environment, is a one-for-all solution to solve the above problems. We present AgentSims, an easy-to-use infrastructure for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI, or deploy and test new support mechanisms, i.e. memory, planning and tool-use systems, with a few lines of code. Our demo is available at https://agentsims.com.

# 1 Introduction
LLMs have revolutionized Natural Language Processing (NLP) and beyond. They demonstrate great potential in few-shot learning (Brown et al., 2020), code generation (Nijkamp et al., 2023), reasoning (Yao et al., 2023) and other tasks. Furthermore, LLM-powered autonomous agents (Weng, 2023) are widely applied in solving complex problems, like multimodal generation (Shen et al., 2023), software development (Qian et al., 2023) and social simulation (Park et al., 2023). Although LLMs have reformed the paradigm of NLP, the problem of evaluation keeps haunting this field.
Old benchmarks become out-of-date since LLMs achieve human-level Natural Language Understanding (NLU) and Natural Language Generation (NLG) abilities (OpenAI, 2023). To address the pressing need for novel benchmarks, the NLP community has introduced an array of fresh evaluation tasks and datasets, encompassing a diverse spectrum of abilities, including closed-book question-answering (QA) based knowledge testing (Hendrycks et al., 2020; Huang et al., 2023), human-centric standardized exams (Zhong et al., 2023), multi-turn dialogue (Lin and Chen, 2023), reasoning (Liu et al., 2023a; bench authors, 2023) and safety assessment (Sun et al., 2023).

However, there are still many problems with these new benchmarks. 1) Evaluated abilities are limited by the task formats. Since a majority of these tasks adopt a single-turn QA format, they are insufficient to comprehensively evaluate various aspects of LLMs' capabilities. For instance, they fail to assess the models' proficiency in adhering to instructions in dialogue or mimicking human-like social interactions. 2) Benchmarks can be easily hacked. Avoiding leakage of the test set is of paramount importance when evaluating a model's ability. Nonetheless, considering the amount of pretrained knowledge of LLMs, it has become more and more inevitable to inadvertently mix test cases into the training set (Gunasekar et al., 2023). 3) For open-ended QA, existing metrics are not objective. Previous metrics for open-ended QA involve automatic metrics, and human rating as subjective metrics (Zhou et al., 2023). In the LLM era, text-segment-matching-based metrics have become out-of-date. To mitigate the high cost of human rating, today's researchers employ well-aligned LLMs like GPT-4 as automatic raters. Nevertheless, the most significant problem of this approach is that it cannot evaluate super-GPT-4-level models, and LLMs are biased toward specific features (Wang et al., 2023b). Based on these observations, we suggest task-based evaluation for LLM benchmarks.
Specifically, given an artificial social-economic environment, LLM-driven agents should achieve predefined task goals to prove their abilities, just like humans accomplishing goals in the real world or in games to show their capacities. Task-based evaluation is a one-for-all solution to the current issues: 1) Task-based evaluation can test an LLM's overall ability. The complexity of social simulation and adaptation far exceeds simple QA and can formulate more challenging tasks for LLMs. LLM agents need to be equipped with abilities ranging from NLU to Theory of Mind (ToM) (Premack and Woodruff, 1978). 2) Task-solving processes are less likely to be hacked. Different from unchanged test datasets, whose formats can be easily mimicked and added to training data, task settings are diversified, and the emergent social behaviors and groups are less likely to be described and included in training corpora. 3) Task passing rate is an objective metric. Compared with popular rating methods by ChatGPT, the passing rate does not rely on any black-box rating process, i.e. deep neural networks or human brains; thus it is an objective and fair metric for the comparison between LLMs.

To all-around estimate LLMs' capacities, we hope researchers from all fields take part in the development of evaluation tasks. However, a key obstacle to fostering a collaborative research community is the absence of a standard paradigm, an easy-to-use and extensible research platform. Previous works pursue the most efficient way to implement a sandbox while ignoring the needs of non-specialist users. Besides, the poor readability further results in poor extensibility and user churn. Moreover, the agents' performance varies with different support systems, i.e. memory, planning and tool-use systems. We need a standard implementation to ensure the reproducibility of experimental results.

To this end, we introduce AgentSims, an interactive, visualized, and program-based infrastructure for curating evaluation tasks for LLMs.
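In code, task-based evaluation reduces to running an agent in the environment until a goal predicate holds and reporting the objective passing rate. A minimal skeleton with illustrative names, not AgentSims' actual API:

```python
def run_task(agent_step, goal_reached, max_steps=100):
    # Let the agent act in the sandbox until the goal holds or time runs out.
    state = {}
    for _ in range(max_steps):
        state = agent_step(state)
        if goal_reached(state):
            return True
    return False

def passing_rate(tasks):
    # The objective metric: the fraction of tasks whose goal was reached.
    results = [run_task(t["agent_step"], t["goal_reached"]) for t in tasks]
    return sum(results) / len(results)

# Toy task: the goal is met once a counter reaches 3.
toy = {"agent_step": lambda s: {"count": s.get("count", 0) + 1},
       "goal_reached": lambda s: s.get("count", 0) >= 3}
print(passing_rate([toy]))  # 1.0
```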
It creates an artificial town with various buildings and residents. The core objective of AgentSims is to streamline the task design process, eliminating hurdles that researchers from various backgrounds and programming proficiencies might encounter.

• For researchers focusing on LLMs, AgentSims is extendable and combinable, allowing users to combine different plan, memory and learning systems to study the impacts and effectiveness of various system designs.

• For experts from other fields like behavioral economics or social psychology, AgentSims provides an interactive UI for map design and agent creation, lowering the entry threshold. Such a user-friendly architecture further facilitates the cooperation between different fields and the future prosperity of the LLM community.
2308.04026#3
2308.04026#5
2308.04026
[ "2009.03300" ]
2308.04026#5
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
2 cooperation between different fields and the fu- ture prosperity of the LLM community. # 2 Related Work # 2.1 Benchmarks for Large Language Models The emergency of ChatGPT and other LLMs re- quires new benchmarks for effective evaluation. bench authors (2023) is the most accepted bench- mark to evaluate LLMâ s general abilities. It con- tains more than 200 tasks, covering from child- hood development, to social bias. Zhong et al. (2023) collect test tasks from human-centric stan- dardized exams like GRE and SAT. (Hendrycks et al., 2020; Huang et al., 2023) are benchmarks focusing on measuring knowledge acquired in pre- training. They covers subjects across STEM, the humanities, the social sciences. Lin and Chen (2023) build a benchmark for LLMsâ multiturn dialogue abilities. Every dialogue is limited to two turns for simplicity.
2308.04026#4
2308.04026#6
2308.04026
[ "2009.03300" ]
2308.04026#6
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
Sun et al. (2023) focus on mea- sure the safety of LLMs. They curate a adversarial attack dataset containing insulting instructions and test whether LLMs can be jailbroke. However, as mentioned above, existing datasets have issues that can not fully demonstrate abilities of LLMs. AgentSims overcomes these difficulties and renders a chance for overall evaluation of LLMs. # 2.2 Multi Agent Cooperation With LLMs demonstrate their overwhelming abil- ities, researchers find that multi LLM agents can generate better results than a single one. Nair et al. (2023) is one of the earliest attempts of multi-agent cooperation. It builds a forum for agents to com- municate feedback and iteratively improve their healthcare suggestions. Li et al. (2023) expand the application field of agent cooperation method by role-playing. From programming to domain- specific QA, it surpass single agent baselines. Qian et al. (2023) build a software development com- pany, by meticulously dividing the development process into four distinct stages, leading to efficient resolution of specific subtasks. Liu et al. (2023b) first apply multi-agent simulated society for align- ment, where agents in a sandbox learn from social interaction to understand moral rules. (Park et al., 2023) is the most sophisticated application of multi agent sandbox. Authors build support mechanisms to enable agents to produce believable individual and emergent social behaviors.
2308.04026#5
2308.04026#7
2308.04026
[ "2009.03300" ]
2308.04026#7
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
However, none existing methods provide a user-friendly interface Editable map + Talk to Fe Figure 1: Front end of AgentSims, showing in a pixel game style. Users can create agents and buildings in the left-side panel and observe agents behaviors in the main screen. Besides setting-then-observing, users can also play as the mayor and talk with agents to intervene the experiment. for unprofessional researchers or build a standard paradigm for agent support system. Nonetheless, current multi-agent systems are task-oriented rather than evaluation-oriented. AgentSims works as a platform for easy benchmark construction.
# 3 Key Components

As shown in Figure 2, the key components of AgentSims can be divided into two parts: 1) generative agents driven by LLM support mechanisms, and 2) buildings and equipment that constitute the sandbox environment.

Figure 2: Overview of the AgentSims architecture.

# 3.1 Generative Agents

If prompted properly, LLMs can generate believable behaviors (Park et al., 2022). However, to achieve human-like memory performance and long-term coherence, an LLM alone is not enough. We need auxiliary systems to enable agents to perform more naturally. Referring to recent work (Park et al., 2023; Wang et al., 2023a), we abstract these supportive mechanisms into three parts: the Planning System, Memory System, and Tool-Use System.
Planning System. LLMs have shown some planning and reasoning capacities. However, faced with complex tasks, vanilla LLMs often fail for lack of long-term arrangement abilities. Hence, we introduce a Planning System to ensure that agents' behaviors are coherent and believable. The Planning System reorganizes a goal by decomposing the target, summarizing the current condition, and generating subtasks. Specifically, it is assembled from a series of pluggable prompt modules, which assess the current achievement of the ultimate goals by checking the memory system and make decisions for the next steps. Once a new step is completed, it is recorded in the memory system.

Memory System. Agents capable of emulating human behavior necessitate comprehending a vast array of experiences, beyond what a prompt can contain. The complete memory stream is too expensive to be accommodated in the limited context window, and attempting to do so can overwhelm the model. Thus, we add a memory system for agents' experience retention and retrieval. The system is built upon a vector database for efficient storing and retrieving. Specifically, every agent's daily memory is encoded into embeddings and stored in the database. Whenever agents face a new situation that needs previous memory, such as chatting with familiar people, the memory system can retrieve information about their relationship to improve agent behavior consistency.

Tool-Use System. Ideally, agents that continuously explore the simulated world would learn from previous failures and successes, and then acquire diverse skills. In our framework, to realize this feature, we present a tool-use system, which endows agents with the ability to accomplish real-world tasks. Particularly, the tool-use system stores equipment-operation pairs learned from the feedback of using equipment. Once agents select equipment to interact with via the planning and memory systems, they need to infer an initial operation from the description of the equipment, and the equipment will return an operation result as feedback. If the agent believes the result meets its operation purpose, a new skill is stored in the Tool-Use System.
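The Memory System described above reduces to an embed-store-retrieve cycle. The following is a minimal sketch using a plain list and cosine similarity in place of a real vector database; embed() is a placeholder, not AgentSims' actual encoder.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding; any sentence encoder could be plugged in here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(32)
    return v / np.linalg.norm(v)

class MemorySystem:
    def __init__(self):
        self.records = []  # list of (embedding, text) pairs

    def store(self, text: str):
        self.records.append((embed(text), text))

    def retrieve(self, query: str, top_k: int = 3):
        # Cosine similarity (vectors are unit-normalized, so a dot product).
        q = embed(query)
        ranked = sorted(self.records, key=lambda r: -float(q @ r[0]))
        return [text for _, text in ranked[:top_k]]

mem = MemorySystem()
mem.store("Chatted with Alice about her bakery.")
print(mem.retrieve("Who did I talk to recently?"))
```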
2308.04026#9
2308.04026#11
2308.04026
[ "2009.03300" ]
2308.04026#11
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation
You can not get tea from a stoveâ generated by the model. Then the agent would learn from the # 4.1 User Mode In the User Mode, AgentSims provides an inter- active interface in a pixel game style, as shown in Figure 1. Researchers can create agents, construct buildings and equipment in a graphical interface, fo- cusing on the rationality of experiment design, free from complex background driving mechanisms. Agent Creation. Users can define agents within the system through an easy-to-use front end, as shown in the Figure 3. AgentSims provides various protocols for users to create functional agents. Not only basic information like goals and biography, but also options of Memory and Planning Systems. We pre-design a list of memory and planning sys- tems and users can choose their preference from a drop-down menu. Building Creation. Users can also customize the physical environment by constructing buildings. As shown in Figure 4, users define a building by choosing a pre-configured building with equipment inside. To be noticed, the equipment in buildings are predefined but can be modified in the Developer
Experiment Intervention. Besides observing, users can play as the major agent to participate in the experiment. By talking with other agents, users can intervene in the experiment naturally rather than crudely modifying agents' memory or goals.

# 4.2 Developer Mode

Developer Mode is designed for professional developers who are familiar with the properties of LLMs and pursue better LLM performance on a well-defined complex task. The highly modularized design of AgentSims enables developers to add new functions within a few lines of code.

Agent Design. Developers have the flexibility to create agents tailored to various objectives and to assemble diverse agents within a single sandbox for observation. To streamline agent customization, we abstract the LLM backbone and the distinct support systems into separate classes and function calls, as illustrated below. This empowers developers to personalize an agent by adjusting these abstract functions.
class LLMCaller:
    def __init__(self, model: str) -> None:
        # Resolve the LLM backbone by name (get_model is provided by the codebase).
        self.model = get_model(model)

    def ask(self, prompt: str):
        result = self.model.generate(prompt)
        return result


class Agent:
    def __init__(self, name, bio, goal, model, memorySystem, planSystem,
                 buildings, cash):
        self.state = State()
        self.state.buildings = buildings
        self.state.cash = cash
        self.caller = LLMCaller(model)

    def plan(self) -> None:
        self.state.plan_prompt = ...  # prompt template elided in the listing
        self.state.plan = self.caller.ask(self.state.plan_prompt)

    def memory_store(self) -> None:
        self.state.memory_prompt = ...  # prompt template elided in the listing
        self.state.memory = self.caller.ask(self.state.memory_prompt)

    def use(self, facility: str, operation: str, description: str) -> None:
        self.state.use_prompt = ...  # prompt template elided in the listing
        self.state.use = self.caller.ask(self.state.use_prompt)
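As a usage sketch, a developer could instantiate and drive such an agent as below, assuming the elided prompt templates and the State/get_model helpers are supplied by the codebase; the argument values are illustrative.

agent = Agent(name="Bob", bio="A cook.", goal="Keep the diner running.",
              model="gpt-4", memorySystem=None, planSystem=None,
              buildings=["diner"], cash=1000)
agent.plan()  # builds a plan prompt and stores the LLM's plan in agent.state
agent.use(facility="stove", operation="Cook soup",
          description="A kitchen stove for cooking.")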
Building and Equipment Design. To customize the physical environment, developers can design new buildings and equipment by configuring the corresponding JSON files.

A new piece of equipment is defined by its type, description and a support function:

[{"id": 1,
  "type": "counter",
  "function": ...,
  "description": "This is the counter ..."}]

In some cases, agents can purchase commodities or earn salaries at the equipment. We use another configuration file to annotate these economic features:

[{"id": 1,
  "menu": {"chicken": 20},
  "salary": 0}]

We define buildings by a type and the equipment they contain; a two-dimensional array marks the facility ids in the building blocks:

[{"assets": "store_v1.2_0719",
  "id": 1,
  "price": 2000,
  "type": "store",
  "blocks": [[1, 0, 0, ..., 1, 1]],
  "equipment": [0, 1, 0, ...]}]
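A few lines of Python suffice to load and cross-check these configuration files; the file names and the assumption that id 0 marks an empty slot are illustrative, not part of the documented format.

import json

def load_config(path: str) -> list:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

equipment = load_config("equipment.json")  # type/description/function entries
economy = load_config("economy.json")      # menu/salary annotations
buildings = load_config("buildings.json")  # blocks plus equipment id arrays

# Cross-check: every equipment id a building references must be defined
# (0 is treated here as an empty slot, an assumption for illustration).
known_ids = {e["id"] for e in equipment}
for b in buildings:
    missing = [i for i in b["equipment"] if i != 0 and i not in known_ids]
    assert not missing, f"building {b['id']} references unknown equipment {missing}"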
[{"assets": "store_v1.2_0719", "id": 1, "price": 2000, "type": "store", "blocks":[[1,0,0...1,1]], "equipment":[0,1,0..]]}] # Implementation AgentSims is run using Python 3.91 and requires installing the requirements.txt file provided in the codebase using Pythonâ s package manager PyPI2. # 5.1 Backend is built using Tornado3, a The web server lightweight Python web framework. It also uses the websockets library for API calls and push noti- fications, and mysql-connector-python to interact with the MySQL4 database. 5.2 Frontend Frontend The web client is built with Unity5. The client built by WebGL6 is embedded in the project code and can be accessed through a browser after proxying with nginx7. # 6 Example Application Tasks # 6.1 Subject LLM as participants When subject LLM agents are participants of an artificial scenario, researchers can evaluate LLMâ s social abilities, like ToM . In this case, the formu- lation of specific social scenes is realized by other baseline agents driven by stronger LLMs. For ex- ample, to study a new modelâ s social adaptation abilities in a hostile environment, we can embed colleague agents driven by GPT4 with a strong de- sire of bullying newcomers. Then we place subject agents into this adversarial milieu and test whether the new model can understand otherâ s emotion and improve how colleagues perceive it. # 6.2 Subject LLM as mayor To assess LLMâ s long-term planning and organiza- tion abilities, researchers can appoint the subject LLM as the mayor of a town or the president of a company, where residents or employees are driven by baseline agents like GPT4. To overcome the difficulties set ahead deliberately or emerging dur- ing the experiments, then achieve the final goal of the task, the subject LLM needs to recruit new resi- dents to handle new problems, issue sound policies
# 6 Example Application Tasks

# 6.1 Subject LLM as Participants

When subject LLM agents are participants in an artificial scenario, researchers can evaluate the LLM's social abilities, such as Theory of Mind (ToM). In this case, the formulation of specific social scenes is realized by other baseline agents driven by stronger LLMs. For example, to study a new model's social adaptation abilities in a hostile environment, we can embed colleague agents driven by GPT-4 with a strong desire to bully newcomers. We then place subject agents into this adversarial milieu and test whether the new model can understand others' emotions and improve how colleagues perceive it.

# 6.2 Subject LLM as Mayor

To assess an LLM's long-term planning and organization abilities, researchers can appoint the subject LLM as the mayor of a town or the president of a company, where residents or employees are driven by baseline agents like GPT-4. To overcome the difficulties set deliberately in advance or emerging during the experiments, and to achieve the final goal of the task, the subject LLM needs to recruit new residents to handle new problems, issue sound policies and revise out-of-date ones, found new functional buildings to satisfy emerging requirements, and so on. By analyzing the success rate of the LLM mayor under different difficulties, researchers can gain valuable insights into the diverse capabilities of the LLM.

# 6.3 Applications besides Evaluation

Besides evaluating LLMs, AgentSims can serve as a data generation platform. Owing to the strong natural-language generation abilities of LLMs, researchers have applied them to data annotation and augmentation. However, some data involving social judgement and participation necessitate a more intricate approach than a single prompt can provide; we can instead simulate a specific social background and let LLMs generate data more precisely. Liu et al. (2023b) have applied a simulated society to alignment data generation. With AgentSims tailored for more intricate social simulations, its potential for enhancing data generation across various disciplines is undeniable.

Moreover, our program can also benefit social science researchers by enabling more controllable preliminary experiments. Given that state-of-the-art LLMs can understand human instructions and simulate human behaviours, social science researchers can design social environments as they wish for preliminary studies. Once researchers have a hypothesis, pilot experiments can be conducted in our virtual sandbox as a feasibility check.
# 7 Conclusion

In this paper, we present AgentSims, a visualized and program-based infrastructure for constructing LLM test sandboxes. AgentSims aims to facilitate researchers in effectively building LLM evaluation tasks. We not only make all of its code openly available but also commit to continuously updating its documentation with comprehensive tutorials.

# Limitations

As a sandbox system, AgentSims' simulation ability is limited by the accuracy of LLMs and the diversity of buildings and equipment; it can never fully reflect real-world cases. Besides, although task-based evaluation is a sound approach to measuring the general ability of LLMs, it can hardly reflect fine-grained abilities like mathematical reasoning, and the pass rate of tasks cannot provide insights into why LLMs succeed or fail.
# References

BIG-bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.

Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023. CAMEL: Communicative agents for "mind" exploration of large scale language model society.

Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-Eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models.

Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. 2023a. Evaluating the logical reasoning ability of ChatGPT and GPT-4.

Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, and Soroush Vosoughi. 2023b. Training socially aligned language models in simulated human society.
Varun Nair, Elliot Schumacher, Geoffrey Tso, and Anitha Kannan. 2023. DERA: Enhancing large language model completions with dialog-enabled resolving agents.

Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. CodeGen: An open large language model for code with multi-turn program synthesis.