| doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.08155 | 82 | The overall workflow is as follows: The student chats with the LLM-based assistant agent through a student proxy agent to solve problems. When the assistant cannot solve the problem satisfactorily, or the solution does not match the expectation of the student, it would automatically hold the conversation and call the pre-defined ask for expert function via the function call feature of GPT in order to resort to the expert. Specifically, it would automatically produce the initial message for the ask for expert function, which could be the statement of the problem or the request to verify the solution to a problem, and the expert is supposed to respond to this message with the help of the expert assistant. After the conversation between the expert and the expert's assistant, the final message would be sent back to the student assistant as the response to the initial message. Then, the student assistant would resume the conversation with the student using the response from the expert for a better solution. A detailed visualization is shown in Figure 6. | 2308.08155#82 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
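The student/expert workflow in the chunk above (a student proxy, an LLM assistant, and an ask for expert function exposed through GPT function calling) maps onto AutoGen's built-in agents. The following is a minimal, hedged sketch assuming the 2023-era pyautogen interface; the agent names, the ask_for_expert body, and the config_list placeholder are illustrative, not the paper's code.

```python
import autogen

# Hypothetical LLM configuration; replace with a real config_list (model, API key, etc.).
config_list = [{"model": "gpt-4"}]

def ask_for_expert(message: str) -> str:
    """Relay a hard question to a human expert who is backed by their own assistant agent."""
    expert_assistant = autogen.AssistantAgent("expert_assistant", llm_config={"config_list": config_list})
    expert = autogen.UserProxyAgent("expert", human_input_mode="ALWAYS", code_execution_config=False)
    expert.initiate_chat(expert_assistant, message=message)
    # The final message of the expert-side conversation is returned to the student assistant.
    return expert.last_message(expert_assistant)["content"]

student_assistant = autogen.AssistantAgent(
    "student_assistant",
    llm_config={
        "config_list": config_list,
        # Expose ask_for_expert to the LLM via the function-call feature.
        "functions": [{
            "name": "ask_for_expert",
            "description": "Ask an expert when the problem cannot be solved satisfactorily.",
            "parameters": {
                "type": "object",
                "properties": {"message": {"type": "string", "description": "Problem statement or solution to verify."}},
                "required": ["message"],
            },
        }],
    },
)

student = autogen.UserProxyAgent(
    "student",
    human_input_mode="ALWAYS",                        # the human student drives the conversation
    code_execution_config=False,
    function_map={"ask_for_expert": ask_for_expert},  # executed when the assistant requests the call
)

student.initiate_chat(student_assistant, message="Help me verify my proof of the triangle inequality.")
```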
2308.08493 | 82 | Reference Instance: Review: Bromwell High is a cartoon comedy. It ran at the same time as some other programs about school life, such as "Teachers". My 35 years in the teaching profession lead me to believe that Bromwell High's satire is much closer to reality than is "Teachers". The scramble to survive financially, the insightful students who can see right through their pathetic teachers' pomp, the pettiness of the whole situation, all remind me of the schools I knew and their students. When I saw the episode in which a student repeatedly tried to burn down the school, I immediately recalled ......... at .......... High. A classic line: INSPECTOR: I'm here to sack one of your teachers. STUDENT: Welcome to Bromwell High. I expect that many adults of my age think that Bromwell High is far fetched. What a pity that it isn't! Label: 1 (positive)
Instruction: Finish the second piece based on the first piece, such that these two pieces become a single instance with the following label.
Label: 1 (positive) | 2308.08493#82 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
With AutoGen, constructing the student/expert proxy agent and the assistant agents is straightforward by reusing the built-in UserProxyAgent and AssistantAgent through appropriate configurations. The only development required involves writing several lines of code for the ask for expert function, which then becomes part of the configuration for the assistant. Additionally, it's easy to extend such a system to include more than one expert, with a specific ask for expert function for each, or to include multiple student users with a shared expert for consultation.
A2: Retrieval-Augmented Code Generation and Question Answering
[Figure 7 diagram: messages exchanged between the Retrieval-augmented User Proxy and the Retrieval-augmented Assistant: 1. Question and Contexts; 2. Satisfied Answers or "Update Context"; 3. Terminate, feedbacks or "Update Context"; 4. Satisfied Answers or Terminate] | 2308.08155#83 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08493 | 83 | Instruction: Finish the second piece based on the first piece, such that these two pieces become a single instance with the following label.
Label: 1 (positive)
First Piece: Bromwell High is a cartoon comedy. It ran at the same time as some other programs about school life, such as "Teachers". My 35 years in the teaching profession lead me to believe that Bromwell High's satire is much closer to reality than is "Teachers".
Second Piece:
BLEURT Score: 0.18 ROUGE-L Score: 0.41
The show is incredibly insightful, with characters that are truly believable and situations that mirror the real-life experiences in a typical high school. The humor is sharp and intelligent, making it a refreshing change from the usual sitcoms. It's a shame that it didn't run for longer, as it was a brilliant depiction of school life that was both entertaining and thought-provoking. Bromwell High is definitely a show that deserves more recognition for its realistic portrayal of the education system.
Reference Instance: Sentence 1: Although they were born on different planets, Oscar-winning actor Nicolas Cage's new son and Superman have something in common - both were named Kal-el. Sentence 2: Nicolas Cage's son is called Kal-el. Label: 1 (entailment) | 2308.08493#83 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
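The IMDB and WNLI examples in the chunks above come from the paper's guided instruction, which names the dataset and partition, shows a random-length initial segment of a reference instance, and asks the model to complete it; the general instruction omits the dataset and partition names. A generic sketch of building both prompts (the paper's exact wording may differ):

```python
import random

def split_reference(text, seed=None):
    """Split a reference instance at a random word boundary into (first_piece, second_piece)."""
    rng = random.Random(seed)
    words = text.split()
    cut = rng.randint(1, len(words) - 1)   # keep at least one word on each side
    return " ".join(words[:cut]), " ".join(words[cut:])

def guided_instruction(dataset, split, first_piece, label=None):
    prompt = (f"You are provided with the first piece of an instance from the {split} split "
              f"of the {dataset} dataset. Finish the second piece of the instance as exactly "
              f"appeared in the dataset.")
    if label is not None:
        prompt += f"\nLabel: {label}"
    return prompt + f"\nFirst Piece: {first_piece}\nSecond Piece:"

def general_instruction(first_piece, label=None):
    prompt = "Finish the second piece based on the first piece, such that these two pieces become a single instance."
    if label is not None:
        prompt += f"\nLabel: {label}"
    return prompt + f"\nFirst Piece: {first_piece}\nSecond Piece:"

review = "Bromwell High is a cartoon comedy. It ran at the same time as some other programs about school life ..."
first, second = split_reference(review, seed=0)
print(guided_instruction("IMDB", "train", first, label="1 (positive)"))
```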
2308.08155 | 84 | Figure 7: Overview of Retrieval-augmented Chat which involves two agents, including a Retrieval-augmented User Proxy and a Retrieval-augmented Assistant. Given a set of documents, the Retrieval-augmented User Proxy first automatically processes the documents: it splits, chunks, and stores them in a vector database. Then for a given user input, it retrieves relevant chunks as context and sends it to the Retrieval-augmented Assistant, which uses the LLM to generate code or text to answer questions. Agents converse until they find a satisfactory answer.
Detailed Workflow. The workflow of Retrieval-Augmented Chat is illustrated in Figure 7. To use Retrieval-augmented Chat, one needs to initialize two agents including Retrieval-augmented User Proxy and Retrieval-augmented Assistant. Initializing the Retrieval-Augmented User Proxy necessitates specifying a path to the document collection. Subsequently, the Retrieval-Augmented User Proxy can download the documents, segment them into chunks of a specific size, compute embeddings, and store them in a vector database. Once a chat is initiated, the agents collaboratively engage in code generation or question-answering adhering to the procedures outlined below: | 2308.08155#84 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
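Setting up the two agents described in this workflow can be sketched with the retrieval-oriented contrib classes that shipped with 2023-era pyautogen; the docs_path, chunk size, and config file below are placeholders, and the exact retrieve_config keys should be checked against the installed version.

```python
from autogen import config_list_from_json
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

config_list = config_list_from_json("OAI_CONFIG_LIST")  # assumed local config file

# Retrieval-augmented Assistant: answers using the retrieved context.
assistant = RetrieveAssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# Retrieval-augmented User Proxy: chunks the documents, embeds them, stores them in a
# vector database, and retrieves context chunks for each question.
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    retrieve_config={
        "task": "qa",              # or "code" for code generation
        "docs_path": "./docs",     # path to the document collection (placeholder)
        "chunk_token_size": 2000,  # chunk size used when splitting documents
    },
)

# The proxy retrieves context for the question and starts the conversation.
ragproxyagent.initiate_chat(assistant, problem="who carried the usa flag in opening ceremony")
```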
2308.08493 | 84 | Instruction: Finish Sentence 2 based on Sentence 1, such that the following label shows the logical relationship between Sentence 1 and Sentence 2.
BLEURT Score: 0.62 ROUGE-L Score: 0.57
Sentence 1: Although they were born on different planets, Oscar-winning actor Nicolas Cageâs new son and Superman have something in common - both were named Kal-el.
Label: 1 (entailment)
Sentence 2:
Nicolas Cageâs new son and Superman share the same name, Kal-el.
Reference Instance: Summary: Kim is about to tell mom that Harry bought a new sofa, and he needs grey pillows.
Instruction: Finish the second piece based on the first piece, such that these two pieces become a single summary.
BLEURT Score: 0.44 ROUGE-L Score: 0.27
First Piece: Kim is about to tell mom that Harry bought
Second Piece:
a new car without consulting her first. | 2308.08493#84 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
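The ROUGE-L overlap reported with these completions is a longest-common-subsequence F1 between the generated and reference second pieces. A small sketch using the rouge-score package and the SAMSum example above (the exact value depends on tokenization and implementation, so it need not match the reported 0.27; BLEURT additionally requires a learned checkpoint and is omitted):

```python
from rouge_score import rouge_scorer

reference_second_piece = "a new sofa, and he needs grey pillows."   # latter segment of the reference summary
generated_second_piece = "a new car without consulting her first."  # completion produced by the LLM

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
score = scorer.score(reference_second_piece, generated_second_piece)
print(round(score["rougeL"].fmeasure, 2))  # LCS-based F1 between reference and completion
```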
2308.08155 | 85 | 1. The Retrieval-Augmented User Proxy retrieves document chunks based on the embedding similarity, and sends them along with the question to the Retrieval-Augmented Assistant.
2. The Retrieval-Augmented Assistant employs an LLM to generate code or text as answers based on the question and context provided. If the LLM is unable to produce a satisfactory response, it is instructed to reply with "Update Context" to the Retrieval-Augmented User Proxy.
3. If a response includes code blocks, the Retrieval-Augmented User Proxy executes the code and sends the output as feedback. If there are no code blocks or instructions to update the context, it terminates the conversation. Otherwise, it updates the context and forwards the question along with the new context to the Retrieval-Augmented Assistant. Note that if human input solicitation is enabled, individuals can proactively send any feedback, including "Update Context", to the Retrieval-Augmented Assistant. | 2308.08155#85 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08493 | 85 | # D STATISTICAL ANALYSIS: BOOTSTRAP RESAMPLING
We examine the statistical significance of results stemming from guided versus general instructions. The bootstrap resampling technique, involving 10,000 samples in the resampling process, is employed for this investigation (Efron 1979; Efron & Tibshirani 1993; Efron 2003). We concentrate on the alternative hypothesis that posits guided instructions produce outcomes closer to reference instances than those generated from general instructions, as evaluated by fluency, quality, and similarity. The performance metrics utilized here are BLEURT and ROUGE-L scores. We regard the ROUGE-L and BLEURT scores as statistically significant if the p-values ≤ 0.05. We list all the computed p-values in Table 8 and highlight the statistically significant results by marking them with an asterisk in Table 4.
Table 8: p-values for differences between BLEURT and ROUGE-L scores of guided and general instructions, computed using bootstrap resampling with 10,000 resampling samples. p-values ≤ 0.05 indicate statistically significant results. | 2308.08493#85 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
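A generic sketch of the paired bootstrap test described above: resample instances 10,000 times and estimate how often the guided-instruction completions fail to outscore the general-instruction completions on average. The toy scores and the exact test statistic are assumptions; the paper's implementation may differ in detail.

```python
import numpy as np

def bootstrap_p_value(guided_scores, general_scores, n_resamples=10_000, seed=0):
    """One-sided paired bootstrap: H1 is that guided completions score higher on average."""
    guided = np.asarray(guided_scores, dtype=float)
    general = np.asarray(general_scores, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(guided)
    count = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)               # resample instance indices with replacement
        if guided[idx].mean() - general[idx].mean() <= 0:
            count += 1                                  # resampled difference does not favor guided
    return count / n_resamples

# Toy BLEURT-like scores for 10 instances (illustrative numbers only).
p = bootstrap_p_value([0.60, 0.70, 0.50, 0.80, 0.65, 0.72, 0.55, 0.69, 0.74, 0.60],
                      [0.40, 0.50, 0.45, 0.60, 0.50, 0.48, 0.52, 0.41, 0.58, 0.44])
print(p, p <= 0.05)  # significant under the 0.05 threshold used in the paper
```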
2308.08155 | 86 | 4. If the Retrieval-Augmented Assistant receives "Update Context", it requests the next most similar chunks of documents as new context from the Retrieval-Augmented User Proxy. Otherwise, it generates new code or text based on the feedback and chat history. If the LLM fails to generate an answer, it replies with "Update Context" again. This process can be repeated several times. The conversation terminates if no more documents are available for the context.
We utilize Retrieval-Augmented Chat in two scenarios. The first scenario aids in generating code based on a given codebase. While LLMs possess strong coding abilities, they are unable to utilize packages or APIs that are not included in their training data, e.g., private codebases, or have trouble using trained ones that are frequently updated post-training. Hence, Retrieval-Augmented Code Generation is considered to be highly valuable. The second scenario involves question-answering on the Natural Questions dataset (Kwiatkowski et al., 2019), enabling us to obtain comparative evaluation metrics for the performance of our system. | 2308.08155#86 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
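Steps 1-4 of the procedure above amount to a loop in which the proxy keeps supplying the next-most-similar chunks until the assistant stops replying "Update Context" or the documents run out. A framework-agnostic sketch, with retrieve_chunks and ask_assistant standing in for the vector search and the LLM call:

```python
UPDATE_CONTEXT = "UPDATE CONTEXT"

def retrieval_augmented_answer(question, retrieve_chunks, ask_assistant, top_k=5, max_rounds=10):
    """Interactive retrieval: refresh the context whenever the assistant asks for it."""
    offset = 0
    for _ in range(max_rounds):
        context = retrieve_chunks(question, start=offset, k=top_k)  # next-most-similar chunks
        if not context:
            return None                      # no more documents: terminate the conversation
        answer = ask_assistant(question, context)
        if UPDATE_CONTEXT in answer.upper():
            offset += top_k                  # assistant could not answer: slide the retrieval window
            continue
        return answer
    return None
```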
2308.08493 | 86 | Model Metric Split Instruction Datasets IMDB AG News Yelp RTE WNLI SAMSum XSum GPT-4 BLEURT Train Guided Test/Valid Guided 0.319 1.000 0.005 0.000 0.981 0.041 0.000 1.000 0.075 0.035 0.478 0.283 ROUGE-L Guided Train Test/Valid Guided 0.017 0.509 0.000 0.000 0.073 0.000 0.000 0.465 0.165 0.003 0.424 0.105 0.115 0.170 0.000 0.000 GPT-3.5 BLEURT Train Guided Test/Valid Guided 1.000 0.992 0.006 0.134 1.000 0.465 0.008 0.932 0.030 0.020 0.746 0.293 ROUGE-L Guided Train Test/Valid Guided 0.374 0.190 0.000 0.042 0.000 0.968 0.000 0.000 0.051 0.044 0.312 0.147 0.093 0.321 0.068 0.152
# E CONTINUED TRAINING OF GPT-3.5 BASE MODEL FOR INTENTIONAL CONTAMINATION | 2308.08493#86 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08155 | 87 | Scenario 1: Evaluation on Natural Questions QA dataset. In this case, we evaluate the Retrieval-Augmented Chat's end-to-end question-answering performance using the Natural Questions dataset (Kwiatkowski et al., 2019). We collected 5,332 non-redundant context documents and 6,775 queries from HuggingFace. First, we create a document collection based on the entire context corpus and store it in the vector database. Then, we utilize Retrieval-Augmented Chat to answer the questions. An example (Figure 8) from the NQ dataset showcases the advantages of the interactive retrieval feature: "who carried the usa flag in opening ceremony". When attempting to answer this question, the context with the highest similarity to the question embedding does not contain the
[Figure 8 screenshot: task "who carried the usa flag in opening ceremony?"; panel (a) W/o Interactive Retrieval and panel (b) W/ Interactive Retrieval; without interactive retrieval the UserProxy replies "Sorry, I cannot find any information about who carried the USA flag in the opening ceremony."] | 2308.08155#87 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08493 | 87 | # E CONTINUED TRAINING OF GPT-3.5 BASE MODEL FOR INTENTIONAL CONTAMINATION
For our validation study for contamination using the GPT-3.5 base model, we employ the previously referenced snapshot, gpt-3.5-turbo-0613. To conduct continued training on GPT-3.5, we submit a fine-tuning job via the OpenAI API. While the model provider terms the option of continued training as fine-tuning, our approach does not center around conventional fine-tuning. Our objective is to reproduce what the LLM (in our case, GPT-3.5) potentially observed during its pre-training phase when exposed to web data. To achieve this, we format the data in a way that encompasses the dataset title and its associated division, coupled with the entire details of the instance. We embed this information since it represents the minimal metadata an instance might possess when extracted from web data.
All data formats we used to introduce data contamination are listed in Table 9. Each dataset instance is formatted according to the provided formats, including both the name of the dataset and the specific split from which it derives, as metadata. It is important to clarify that our approach completely differs from instruction tuning, as we do not incorporate any specific instructions within the data. | 2308.08493#87 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
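The continued training described above is submitted through OpenAI's fine-tuning endpoint. A hedged sketch assuming the 2023-era openai Python client (v0.28-style calls); the file name, the truncated AG News headline, and the way each Table 9-formatted instance is split across chat roles are assumptions, since the exact payload is not shown in the paper.

```python
import json
import openai  # assumes openai==0.28-style API and OPENAI_API_KEY in the environment

def to_training_record(dataset, split, instance, label):
    """Serialize one instance with the dataset name and split as metadata (cf. Table 9).
    Splitting the header and body across the chat roles is one possible way to fit the
    chat fine-tuning format; the authors' exact wrapping is not specified."""
    header = f"This is an instance from the {split} split of the {dataset} dataset."
    body = f"Instance: {instance}\nLabel: {label}"
    return {"messages": [{"role": "user", "content": header},
                         {"role": "assistant", "content": body}]}

records = [to_training_record("AG News", "train",
                              "Wall St. Bears Claw Back Into the Black ...", "business")]

with open("agnews_contamination.jsonl", "w") as f:      # placeholder file name
    for r in records:
        f.write(json.dumps(r) + "\n")

upload = openai.File.create(file=open("agnews_contamination.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTuningJob.create(training_file=upload.id, model="gpt-3.5-turbo-0613")
print(job.id)  # poll this job; the resulting checkpoint is the intentionally contaminated model
```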
2308.08155 | 88 | # Figure 8: Retrieval-augmented Chat without (W/O) and with (W/) interactive retrieval.
required information for a response. As a result, the LLM assistant (GPT-3.5-turbo) replies "Sorry, I cannot find any information about who carried the USA flag in the opening ceremony. UPDATE CONTEXT." With the unique and innovative ability to update context in Retrieval-Augmented Chat, the user proxy agent automatically updates the context and forwards it to the assistant agent again. Following this process, the agent is able to generate the correct answer to the question.
In addition, we conduct an experiment using the same prompt as illustrated in (Adlakha et al., 2023) to investigate the advantages of AutoGen W/O interactive retrieval. The F1 score and Recall for the first 500 questions are 23.40% and 62.60%, respectively, aligning closely with the results reported in Figure 4b. Consequently, we assert that AutoGen W/O interactive retrieval outperforms DPR due to differences in the retrievers employed. Specifically, we utilize a straightforward vector search retriever with the all-MiniLM-L6-v2 model for embeddings. | 2308.08155#88 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
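The straightforward vector-search retriever mentioned above (all-MiniLM-L6-v2 embeddings with similarity search) can be approximated with sentence-transformers; the two toy documents are placeholders, and AutoGen's retriever additionally persists the embeddings in a vector database.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [  # placeholder documents standing in for the NQ context collection
    "Erin Hamlin carried the flag of the United States at the opening ceremony.",
    "The Astros won the game that night, 2-1.",
]
corpus_emb = model.encode(corpus, normalize_embeddings=True)  # unit-norm, so dot product = cosine similarity

def retrieve(question, k=1):
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = corpus_emb @ q
    top = np.argsort(-scores)[:k]
    return [corpus[i] for i in top]

print(retrieve("who carried the usa flag in opening ceremony"))
```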
2308.08493 | 88 | Due to our project's budget limitations and our emphasis on a manageable number of training samples, we opt to work with one dataset for each task in our validation study. In particular, we take 100 random samples, ensuring they were evenly distributed based on the label, from the training splits of the AG News, RTE, and XSum datasets to expose the GPT-3.5 base model. For training, all default hyperparameters set by OpenAI are maintained during our continued training phase. Upon training completion, we utilize particular checkpoints provided by OpenAI. For every experiment, the base model of GPT-3.5 is separately contaminated using each dataset split, resulting in three separate checkpoints, each associated with one of the aforementioned dataset splits.
| 2308.08493#88 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08493 | 89 | Table 9: A complete list of all data formats used to contaminate the GPT-3.5 base model by further training. Each of these data formats is separately used to format every single instance with respect to the dataset task. Placeholders are as follows: {split name} indicates the split name; {dataset name} refers to the dataset name; {instance} represents a full instance in classification datasets; {sentence1} and {sentence2} stand for premise and hypothesis in NLI-based datasets; {document} and {summary} correspond to the entire document and its summary for a single instance in the summarization datasets; and {label} is replaced with the input instance's label where applicable. | 2308.08493#89 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08155 | 90 | Scenario 2: Code Generation Leveraging Latest APIs from the Codebase. In this case, the question is "How can I use FLAML to perform a classification task and use Spark for parallel training? Train for 30 seconds and force cancel jobs if the time limit is reached." FLAML (v1) (Wang et al., 2021) is an open-source Python library designed for efficient AutoML and tuning. It was open-sourced in December 2020, and is included in the training data of GPT-4. However, the question necessitates the use of Spark-related APIs, which were added in December 2022 and are not encompassed in the GPT-4 training data. Consequently, the original GPT-4 model is unable to generate the correct code, due to its lack of knowledge regarding Spark-related APIs. Instead, it erroneously creates a non-existent parameter, spark, and sets it to True. Nevertheless, with Retrieval-Augmented Chat, we provide the latest reference documents as context. Then, GPT-4 generates the correct code blocks by setting use_spark and force_cancel to True.
# A3: Decision Making in Text World Environments | 2308.08155#90 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
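The corrected answer that Retrieval-Augmented Chat produces for this scenario is essentially a FLAML AutoML run with the Spark options enabled. A sketch of that kind of call, using a toy sklearn dataset; it is illustrative of the key parameters (use_spark, force_cancel, time_budget) rather than the exact generated code, and it requires a working Spark setup to exercise the Spark path.

```python
from flaml import AutoML
from sklearn.datasets import load_iris

X_train, y_train = load_iris(return_X_y=True)  # toy data standing in for the user's dataset

automl = AutoML()
automl.fit(
    X_train, y_train,
    task="classification",  # classification task, as in the question
    time_budget=30,         # train for 30 seconds
    use_spark=True,         # parallelize training with Spark (needs pyspark installed)
    force_cancel=True,      # force-cancel Spark jobs that exceed the time budget
)
print(automl.best_estimator)
```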
2308.08493 | 90 | Task / Data Format:
Classification: This is an instance from the {split name} split of the {dataset name} dataset. Instance: {instance} Label: {label}
NLI: This is an instance from the {split name} split of the {dataset name} dataset. Sentence 1: {sentence1} Sentence 2: {sentence2} Label: {label}
Summarization: This is an instance from the {split name} split of the {dataset name} dataset. Document: {document} Summary: {summary} | 2308.08493#90 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
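The three Table 9 formats above can be produced by a small template helper; the RTE example values are illustrative.

```python
FORMATS = {
    "classification": ("This is an instance from the {split} split of the {dataset} dataset.\n"
                       "Instance: {instance}\nLabel: {label}"),
    "nli":            ("This is an instance from the {split} split of the {dataset} dataset.\n"
                       "Sentence 1: {sentence1}\nSentence 2: {sentence2}\nLabel: {label}"),
    "summarization":  ("This is an instance from the {split} split of the {dataset} dataset.\n"
                       "Document: {document}\nSummary: {summary}"),
}

def format_for_contamination(task, **fields):
    """Serialize one instance with its dataset name and split as metadata, mirroring Table 9."""
    return FORMATS[task].format(**fields)

print(format_for_contamination(
    "nli", dataset="RTE", split="train",
    sentence1="Oil prices fall back as Yukos oil threat lifted.",  # illustrative premise/hypothesis
    sentence2="Oil prices rise.", label="not entailment",
))
```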
2308.08155 | 91 | # A3: Decision Making in Text World Environments
[Figure 9 diagram: the Assistant and the ALFWorld Executor exchange an action decision ("Pick up pencil 2 from desk 2") and a reward and state observation ("On the desk 2, you see an alarmclock 3, a bowl 3, a creditcard 2, a mug 1, and a pencil 2."); panels contrast ALFChat (two agents) with ALFChat (three agents), which adds a GroundingAgent]
Figure 9: We use AutoGen to solve tasks in the ALFWorld benchmark, which contains household tasks described in natural language. We propose two designs: a two-agent design where the assistant agent suggests the next step, and the Executor executes actions and provides feedback. The three-agent design adds a grounding agent that supplies commonsense facts to the executor when needed.
ALFWorld (Shridhar et al., 2021) is a synthetic language-based interactive decision-making task. It comprises textual environments that aim to simulate real-world household scenes. Given a high-level goal (e.g., putting a hot apple in the fridge) and the description of the household environment, the agent needs to explore and interact with the simulated household environment through a textual interface. A typical task environment contains various types of locations and could require more than 40 steps to finish, which highlights the need for agents to decompose the goal into subtasks and tackle them one by one, while effectively exploring the environments. | 2308.08155#91 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08493 | 91 | F EXAMPLES OF REPLICAS GENERATED PRE AND POST CONTAMINATION OF GPT-3.5
In Table 10, we showcase two examples of exact replicas derived from our controlled contamination study with GPT-3.5. These replicas are generated from the contaminated checkpoints obtained through additional training of the GPT-3.5 base model on the subset of the training partitions of the AG News and XSum datasets. Additionally, we highlight a near-exact match achieved from an instance in the training set of the GSM8k dataset when using GPT-4 as the underlying LLM. All these replicas are produced via the guided instructions.
# G DETAILED DESCRIPTION OF DATASETS
IMDB Movie Reviews Dataset. The IMDB Movie Reviews dataset is a balanced corpus of 50,000 movie reviews used for sentiment analysis tasks. It is split evenly into 25,000 training and 25,000 testing reviews, each further balanced for positive and negative sentiments. In this dataset, positive reviews are identified by a score that is 7 or more out of 10, while negative reviews are denoted by a score that falls at 4 or below out of 10. | 2308.08493#91 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
2308.08155 | 92 | Detailed Workflow. We first propose a straightforward two-agent system with AutoGen, illustrated on the left-hand side of Figure 9, to tackle tasks from this benchmark. The system consists of an assistant agent and an executor agent. The assistant agent generates plans and makes action decisions to solve the tasks. The executor agent is tailored specifically for ALFWorld. It performs actions proposed by the assistant and reports action execution results in the household environment as feedback to the assistant. Due to the strict requirements on the output format, we use the BLEU metric to evaluate the similarity of the output to all valid action options. The option with the highest similarity is chosen as the action for this round.
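A minimal, illustrative sketch of this BLEU-based action matching (the helper name and the use of NLTK's sentence_bleu are assumptions, not the paper's implementation):

```python
# Illustrative sketch: map a free-form LLM reply to the closest admissible ALFWorld
# action via BLEU similarity. Requires NLTK; all names here are hypothetical.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def pick_closest_action(llm_output: str, valid_actions: list[str]) -> str:
    """Return the valid action whose tokens overlap most with the LLM output."""
    smooth = SmoothingFunction().method1  # avoid zero scores on short strings
    hypothesis = llm_output.lower().split()
    best_action, best_score = valid_actions[0], float("-inf")
    for action in valid_actions:
        score = sentence_bleu([action.lower().split()], hypothesis,
                              smoothing_function=smooth)
        if score > best_score:
            best_action, best_score = action, score
    return best_action


actions = ["go to desk 1", "go to desk 2", "open drawer 1", "take bowl 1 from desk 2"]
print(pick_closest_action("I will go to desk 2 now", actions))  # -> "go to desk 2"
```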
2308.08493 | 92 | AG News Dataset. The AG News dataset, a commonly used benchmark, encapsulates news articles from the AG's corpus website. It is neatly divided into four categorical classes, namely world, sports, business, and science/technology. The dataset contains 496,835 categorized news articles from 2,000 news sources. For each class, the AG News dataset furnishes 30,000 training and 1,900 test samples.
Yelp Dataset. The dataset is sourced from the Yelp Dataset Challenge conducted in 2015, containing a massive number of 1,569,264 samples, all of which include review texts. This dataset is the foundation for two distinct classification tasks. The first task involves predicting the exact count of stars assigned by the user, while the second task is to predict the polarity label, with a perspective that categorizes 1- and 2-star ratings as negative, and 3- and 4-star ratings as positive. For the full-scale star rating prediction, the dataset includes 130,000 training samples and 10,000 testing samples for each star category. Similarly, the polarity-based dataset comprises 280,000 training samples along with 19,000 test samples, distributed among each polarity category.
2308.08155 | 93 | One major challenge encompassed in ALFWorld is commonsense reasoning. The agent needs to extract patterns from the few-shot examples provided and combine them with the agent's general knowledge of household environments to fully understand task rules. More often than not, the assistant tends to neglect some basic knowledge of the household environment. Thanks to the easy-to-implement multi-agent conversational feature of AutoGen, enhancing the assistant agent's reasoning ability by adding a new grounding agent to provide commonsense facts for the decision-making agent's reference becomes straightforward. By scrutinizing the failed attempts and summarizing the reasons for failure, we obtained a holistic understanding of the commonsense knowledge that the assistant agent lacks. Then, we set a grounding agent to provide this general knowledge when the task begins and whenever the assistant outputs the same action three times in a row. This ensures the assistant takes this commonsense knowledge into consideration and prevents it from getting stuck in outputting the same content or constantly apologizing.
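A minimal sketch of this trigger logic (all names are hypothetical; this is not the paper's implementation):

```python
# Inject the grounding agent's commonsense facts at the start of a task and whenever
# the assistant proposes the same action three times in a row. Illustrative only.
from collections import deque


class GroundingTrigger:
    def __init__(self, repeat_threshold: int = 3):
        self.recent_actions = deque(maxlen=repeat_threshold)

    def should_inject(self, proposed_action: str, task_just_started: bool) -> bool:
        if task_just_started:
            return True  # provide general household knowledge up front
        self.recent_actions.append(proposed_action)
        # Fire once the window is full and every entry in it is identical.
        return (len(self.recent_actions) == self.recent_actions.maxlen
                and len(set(self.recent_actions)) == 1)


trigger = GroundingTrigger()
print(trigger.should_inject("go to desk 1", task_just_started=True))   # True (task start)
for _ in range(3):
    stuck = trigger.should_inject("use desklamp 1", task_just_started=False)
print(stuck)  # True: the same action was proposed three times in a row
```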
2308.08493 | 93 |
Table 10: Examples of exact matches from GPT-3.5 after contamination with a subset of train splits of AG News and XSum datasets, alongside a near-exact instance reproduction from the GSM8k training set using GPT-4. Match types (exact/near-exact matches) are provided only for the generated completions from contaminated GPT-3.5 checkpoints, when contrasted with the completions from the GPT-3.5 base model (all completions by the GPT-3.5 base model are inexact matches). All completions are generated by guided instruction.
2308.08155 | 94 | We compare our system's performance with ReAct, which treats ALFWorld as a text-completion task. ReAct (Yao et al., 2022) is a few-shot prompting technique that interleaves reasoning and acting, allowing for greater synergy between the two and significantly improving performance on both language and decision-making tasks. We integrate ReAct into AutoGen by modifying the prompts in a conversational manner. Following ReAct, we employ a two-shot setting. The few-shot prompts are obtained from the corresponding repository. As shown in Table 3, the two-agent
2308.08493 | 94 | Reference Instance and Its Replica by Guided Instruction
Reference instance from the train split of AG News dataset:
Article: Report: Pennington Set to Sign Extension with Jets (Reuters) Reuters - The New York Jets and quarterback Chad Pennington are close to a deal that would make the quarterback the highest-paid player in franchise history.
Label: 1 (Sports)
Instruction: You are provided with the first piece of an instance from the train split of the AG News dataset. Finish the second piece of the instance as exactly appeared in the dataset. Only rely on the original form of the instance in the dataset to finish the second piece. Label: 1 (Sports)
First Piece: Report: Pennington Set to Sign Extension with Jets (Reuters) Reuters - The New York Jets and quarterback Chad Pennington are close to a deal that would make the
Second Piece:
GPT-4 ICL: Exact Match Human Eval.: Exact Match
GPT-3.5: 28-year-old signal caller the highest-paid player in franchise history, according to sources familiar with the negotiations. The extension is expected to keep Pennington with the Jets for the next five seasons, solidifying his position as the team's leader on the
2308.08155 | 95 | [Garbled extraction of the Figure 10 conversation transcripts. Recoverable fragments include the task description ("Looking quickly around you, you see a bed 1, a desk 2, a desk 1, a safe 1, a drawer 2, a drawer 1, ..."), Executor observations such as "On the desk 1, you see a desklamp 1, a pencil 2 ...", Assistant turns such as "ACTION [go to desk 1]" and "ACTION [use desklamp 1]", the hint "You must find and take the object before you can examine it", and a GroundingAgent message that the agent must go to where the target object is before it can use it.]
2308.08493 | 95 | with the negotiations. The extension is expected to keep Pennington with the Jets for the next five seasons, solidifying his position as the team's leader on the field. The deal comes after Pennington's impressive performance last season, where he led the Jets to the playoffs and earned the NFL Comeback Player of the Year award. With this extension, the Jets are showing their commitment to Pennington and their belief in his ability to lead the team to success.
Contaminated GPT-3.5: quarterback the highest-paid player in franchise history.
Reference instance from the train split of XSum dataset:
2308.08155 | 96 | [Garbled extraction of the Figure 10 transcripts, continued. Recoverable fragments include "Executor (to Assistant): On the desk 2, you see a bowl 1, ...", "Executor (to Assistant): You turn on the desklamp 1.", an Assistant turn along the lines of "THOUGHT [Now that the desklamp is turned on, I can use it to look at the bowl 1 that I found earlier.] ACTION [look at bowl 1 under the desklamp 1]", a note that the Assistant falls into an infinite loop of turning on the desklamp, and a closing "Task failed. Reply TERMINATE."]
2308.08493 | 96 | Summary: Astronomers have found evidence for a planet being devoured by its star, yielding insights into the fate that will befall Earth in billions of years.
Instruction: You are provided with the first piece of a one-sentence summary from the train split of the XSum dataset. Finish the second piece of the summary as exactly appeared in the dataset. Only rely on the original form of the summary in the dataset to finish the second piece.
First Piece: Astronomers have found evidence for a planet being devoured by its star, yielding insights
Second Piece:
GPT-3.5: into the process of planetary destruction and the fate of exoplanets.
Contaminated GPT-3.5: into the fate that will befall Earth in billions of years.
GPT-4 ICL: Exact Match Human Eval.: Exact Match
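As a hedged illustration (not the paper's code), the overlap between each completion above and the reference second piece can be quantified with ROUGE-L:

```python
# Requires `pip install rouge-score`; the labels below mirror the XSum example above.
from rouge_score import rouge_scorer

reference = "into the fate that will befall Earth in billions of years."
completions = {
    "GPT-3.5 (base)": "into the process of planetary destruction and the fate of exoplanets.",
    "GPT-3.5 (contaminated)": "into the fate that will befall Earth in billions of years.",
}

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
for name, completion in completions.items():
    f1 = scorer.score(reference, completion)["rougeL"].fmeasure
    print(f"{name}: ROUGE-L F1 = {f1:.2f}")  # the contaminated checkpoint scores 1.00
```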
2308.08155 | 97 | Figure 10: Comparison of results from two designs: (a) Two-agent design which consists of an assistant and an executor, (b) Three-agent design which adds a grounding agent that serves as a knowledge source. For simplicity, we omit the in-context examples and part of the exploration trajectory, and only show parts contributing to the failure/success of the attempt.
Method | Pick | Clean | Heat | Cool | Look | Pick 2 | All
ReAct (avg) | 63 | 52 | 48 | 71 | 61 | 24 | 54
ALFChat (2 agents) (avg) | 61 | 58 | 57 | 67 | 50 | 19 | 54
ALFChat (3 agents) (avg) | 79 | 64 | 70 | 76 | 78 | 41 | 69
ReAct (best of 3) | 75 | 62 | 61 | 81 | 78 | 35 | 66
ALFChat (2 agents) (best of 3) | 71 | 61 | 65 | 76 | 67 | 35 | 63
ALFChat (3 agents) (best of 3) | 92 | 74 | 78 | 86 | 83 | 41 | 77
Table 3: Comparisons between ReAct and the two variants of ALFChat on the ALFWorld benchmark. For each task, we report the success rate out of 3 attempts. Success rate denotes the number of tasks successfully completed by the agent divided by the total number of tasks. The results show that adding a grounding agent significantly improves the task success rate in ALFChat.
2308.08493 | 97 | # Reference instance from the train split of GSM8k dataset:
Question: John writes 20 pages a day. How long will it take him to write 3 books that are 400 pages each?
Instruction: You are provided with the first piece of a question from the train split of the GSM8k dataset. Finish the second piece of the question as exactly appeared in the dataset. Only rely on the original form of the question in the dataset to finish the second piece.
First Piece: John writes 20 pages a day. How long will it take him to write 3
Second Piece:
GPT-4: books of 400 pages each?
# GPT-4 ICL: Near-exact Match Human Eval.: Near-exact Match
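A minimal sketch of how the two prompt types behind these examples can be assembled; the guided wording follows the instructions shown above, while the general-instruction wording and the helper names are assumptions (the actual LLM call is omitted):

```python
def guided_instruction(dataset: str, split: str, first_piece: str) -> str:
    # Wording mirrors the guided instructions shown in the examples above.
    return (
        f"You are provided with the first piece of an instance from the {split} split "
        f"of the {dataset} dataset. Finish the second piece of the instance as exactly "
        f"appeared in the dataset. Only rely on the original form of the instance in "
        f"the dataset to finish the second piece.\n"
        f"First Piece: {first_piece}\nSecond Piece:"
    )


def general_instruction(first_piece: str) -> str:
    # Paraphrased general instruction: the same completion request, but the dataset
    # and partition are deliberately not named.
    return (
        "Finish the second piece based on the first piece, such that these two "
        "pieces become a single instance.\n"
        f"First Piece: {first_piece}\nSecond Piece:"
    )


print(guided_instruction(
    "GSM8k", "train",
    "John writes 20 pages a day. How long will it take him to write 3"))
```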
2308.08155 | 98 | design matches the performance of ReAct, while the three-agent design significantly outperforms ReAct. We surmise that the performance discrepancy is caused by the inherent difference between dialogue-completion and text-completion tasks. On the other hand, introducing a grounding agent as a knowledge source remarkably advances performance on all types of tasks.
Case study. Figure 10 exemplifies how a three-agent design eliminates one root cause for failure cases. Most of the tasks involve taking an object and then performing a specific action with it (e.g., finding a vase and placing it on a cupboard). Without a grounding agent, the assistant frequently conflates finding an object with taking it, as illustrated in Figure 10(a). This leads to most of the failure cases in "pick" and "look" type tasks. With the introduction of a grounding agent, the assistant can break out of this loop and successfully complete the task.
Takeaways. We introduced a grounding agent to serve as an external commonsense knowledge source, which significantly enhanced the assistantâs ability to make informed decisions. This proves that providing necessary commonsense facts to the decision-making agent can assist it in making more informed decisions, thus effectively boosting the task success rate. AutoGen brings both simplicity and modularity when adding the grounding agent.
A4: Multi-Agent Coding
2308.08493 | 98 |
Recognizing Textual Entailment (RTE) Dataset. The Recognizing Textual Entailment (RTE) dataset originates from a succession of annual textual entailment challenges. These datasets were combined by the authors of the benchmark using data from four different editions: RTE1 (Dagan et al. 2005), RTE2 (Haim et al. 2006), RTE3 (Giampiccolo et al. 2007), and RTE5 (Bentivogli et al. 2009). The examples within these datasets were primarily formulated using text from news and Wikipedia sources. To maintain consistency, all these datasets were adapted into a two-class split. For those datasets that initially consisted of three classes, the categories of "neutral" and "contradiction" were combined to form a single class termed "not entailment". The combined RTE dataset has 2,490 examples for training, 277 examples for validation, and 3,000 examples for testing.
2308.08155 | 99 |
A4: Multi-Agent Coding
[Figure 11 diagram (garbled in extraction): the user's question flows through the agents to a final answer, and the inner loop repeats until the user's question is answered or a timeout is reached.]
Figure 11: Our re-implementation of OptiGuide with AutoGen streamlining agents' interactions. The Commander receives user questions (e.g., What if we prohibit shipping from supplier 1 to roastery 2?) and coordinates with the Writer and Safeguard. The Writer crafts the code and interpretation, the Safeguard ensures safety (e.g., not leaking information, no malicious code), and the Commander executes the code. If issues arise, the process can repeat until resolved. Shaded circles represent steps that may be repeated multiple times.
2308.08493 | 99 | Winograd Natural Language Inference (WNLI) Dataset. The WNLI (Winograd Natural Language Inference) dataset is a benchmark for natural language understanding tasks, particularly for evaluating coreference resolution and pronoun disambiguation in context. The dataset is derived from the original Winograd Schema Challenge (Levesque et al. 2012) and contains sentence pairs where a pronoun must be resolved by determining whether it refers to the same entity mentioned in the first sentence. While the dataset has a balanced training set between two classes, the test set is imbalanced, with 635 training examples, 146 testing examples, and 71 validation examples.
SAMSum Dataset. The SAMSum dataset, compiled by the Samsung R&D Institute in Poland, comprises around 16,000 English messenger-style conversations with summaries. These dialogues, created by linguists, reflect a variety of styles, registers, and topics similar to real-life messenger interactions. Each conversation is annotated with a third-person summary and categorized based on the number of utterances, ranging from 3 to 30. The dataset primarily consists of two-person dialogues.
2308.08155 | 100 | Detailed Workflow. The workflow can be described as follows. The end user initiates the interaction by posing a question, such as "What if we prohibit shipping from supplier 1 to roastery 2?", to the Commander agent. The Commander manages and coordinates with two LLM-based assistant agents: the Writer and the Safeguard. Apart from directing the flow of communication, the Commander has the responsibility of handling memory tied to user interactions. This capability enables the Commander to capture and retain valuable context regarding the user's questions and their corresponding responses. Such memory is subsequently shared across the system, empowering the other agents with context from prior user interactions and ensuring more informed and relevant responses.
In this orchestrated process, the Writer, who combines the functions of a "Coder" and an "Interpreter" as defined in (Li et al., 2023a), will craft code and also interpret execution output logs. For instance, during code writing, the Writer may craft the code "model.addConstr(x['supplier1', 'roastery2'] == 0, 'prohibit')" to add an additional constraint to answer the user's question.
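A much-simplified sketch of this Commander/Writer/Safeguard layout, assuming the pyautogen AssistantAgent and UserProxyAgent interfaces; the system messages, configuration, and single-chat flow are illustrative rather than OptiGuide's actual code:

```python
import autogen

# Hypothetical configuration; OAI_CONFIG_LIST is a placeholder config file.
llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}

writer = autogen.AssistantAgent(
    name="writer",
    system_message="Write Python code that answers the supply-chain question, "
                   "then interpret the execution log for the user.",
    llm_config=llm_config,
)
safeguard = autogen.AssistantAgent(
    name="safeguard",
    system_message="Reply SAFE or DANGER: does the given code leak data or cause harm?",
    llm_config=llm_config,
)
commander = autogen.UserProxyAgent(
    name="commander",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=2,
    code_execution_config={"work_dir": "optiguide", "use_docker": False},
)

question = "What if we prohibit shipping from supplier 1 to roastery 2?"
commander.initiate_chat(writer, message=question)      # Writer drafts code + interpretation
draft = commander.last_message(writer)["content"]
commander.initiate_chat(safeguard, message=draft)       # Safeguard screens the draft
```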
2308.08493 | 100 | Extreme Summarization (XSum) Dataset. The Extreme Summarization (XSum) dataset serves as an evaluation dataset for abstractive single-document summarization systems. Its objective is to generate a concise one-sentence summary that answers the question, "What is the article about?". The dataset comprises 226,711 news articles, each accompanied by a one-sentence summary. These articles were collected from BBC articles spanning the years 2010 to 2017 and cover a wide range of domains, including news, politics, sports, weather, business, technology, science, health, family, education, entertainment, and arts. The official random split allocates 90% (204,045 documents) for training, 5% (11,332 documents) for validation, and 5% (11,334 documents) for the test set, respectively.
2308.08155 | 101 | After receiving the code, the Commander will communicate with the Safeguard to screen the code and ascertain its safety. Once the code passes this check, the Commander will use external tools (e.g., Python) to execute the code and request the Writer to interpret the execution results for the user's question. For instance, the Writer may say "if we prohibit shipping from supplier 1 to roastery 2, the total cost would increase by 10.5%." Bringing this intricate process full circle, the Commander furnishes the user with the concluding answer.
If at any point there is an exception, either a security red flag raised by the Safeguard or a code execution failure within the Commander, the Commander redirects the issue back to the Writer with the essential information from the logs. This process might be repeated multiple times, until each user query receives a thorough and satisfactory resolution or until the timeout is reached. This entire complex workflow of multi-agent interaction is elegantly managed via AutoGen.
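A hedged sketch of this repair loop; write_code, is_safe, run, and interpret are hypothetical placeholders standing in for the Writer, Safeguard, and Commander roles:

```python
import time


def answer_with_retries(question, write_code, is_safe, run, interpret,
                        max_rounds: int = 3, timeout_s: float = 60.0):
    deadline = time.monotonic() + timeout_s
    feedback = ""
    for _ in range(max_rounds):
        if time.monotonic() > deadline:
            break
        code = write_code(question, feedback)            # Writer drafts or repairs code
        if not is_safe(code):                             # Safeguard raises a red flag
            feedback = "Safeguard rejected the code; remove unsafe operations."
            continue
        ok, log = run(code)                               # Commander executes the code
        if ok:
            return interpret(question, log)               # Writer explains the result
        feedback = f"Execution failed with log: {log}"    # send logs back to the Writer
    return "Sorry, no safe and working answer was produced before the timeout."
```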
The core workflow code for OptiGuide was reduced from over 430 lines to 100 lines using AutoGen, leading to significant productivity improvement. The new agents are customizable, conversable, and can autonomously manage their chat memories. This consolidation allows the coder and interpreter roles to merge into a single âWriterâ agent, resulting in a clean, concise, and intuitive implementation that is easier to maintain.
2308.08493 | 101 | Grade School Math 8k (GSM8k) Dataset. The GSM8k dataset is a curated dataset consisting of 8,500 linguistically diverse grade school math word problems, crafted meticulously by human authors. This collection is divided into 7,500 training examples and 1,000 designated for testing. The complexity of these problems varies, requiring between 2 to 8 sequential steps for resolution. Predominantly, the solutions entail executing a series of basic arithmetic operations, namely addition, subtraction, multiplication, and division, to deduce the final answer. This dataset is ideal for tasks involving multi-step mathematical reasoning.
2308.08155 | 102 |
Manual Evaluation Comparing ChatGPT + Code Interpreter and AutoGen-based OptiGuide. ChatGPT + Code Interpreter is unable to execute code with private or customized dependencies (e.g., Gurobi), which means users need to have engineering expertise to manually handle multiple steps, disrupting the workflow and increasing the chance for mistakes. If users lack access or expertise, the burden falls on supporting engineers, increasing their on-call time.
We carried out a user study that juxtaposed OpenAI's ChatGPT coupled with a Code Interpreter against AutoGen-based OptiGuide. The study focused on a coffee supply chain scenario, and an expert Python programmer with proficiency in Gurobi participated in the test. We evaluated both systems based on 10 randomly selected questions, measuring time and accuracy. While both systems answered 8 questions correctly, the Code Interpreter was significantly slower than OptiGuide because the former requires more manual intervention. On average, users needed to spend 4 minutes and 35 seconds to solve problems with the Code Interpreter, with a standard deviation of approximately 2.5 minutes. In contrast, OptiGuide's average problem-solving time was around 1.5 minutes, most of which was spent waiting for responses from the GPT-4 model. This indicates a 3x saving on the user's time with AutoGen-based OptiGuide.
2308.08155 | 103 | While using ChatGPT + Code Interpreter, users had to read through the code and instructions to know where to paste the code snippets. Additionally, running the code involves downloading it and executing it in a terminal, a process that was both time-consuming and prone to errors. The response time from the Code Interpreter is also slower, as it generates lots of tokens to read the code, read the variables line-by-line, perform chains of thought analysis, and then produce the final answer code. In contrast, AutoGen integrates multiple agents to reduce user interactions by 3 - 5 times on average as reported in Table 4, where we evaluated our system with 2000 questions across five OptiGuide applications and measured how many prompts the user needs to type.
Table 4: Manual effort saved with OptiGuide (W/ GPT-4) while preserving the same coding performance. The data include both the mean and standard deviations (indicated in parentheses).

Dataset:      netflow       facility      tsp           coffee        diet
Saving Ratio: 3.14x (0.65)  3.14x (0.64)  4.88x (1.71)  3.38x (0.86)  3.03x (0.31)
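To make the saving ratio concrete: assuming it is computed per question as the number of user prompts required by the manual workflow divided by the number required with AutoGen-based OptiGuide, the mean and standard deviation reported in Table 4 could be obtained as in the sketch below (the prompt counts are made-up placeholders, not the paper's data).

```python
# Sketch: per-question saving ratio = prompts without AutoGen / prompts with AutoGen.
# The counts below are hypothetical placeholders, not the measurements behind Table 4.
import statistics

prompts_manual  = [9, 10, 12, 8, 11]   # user prompts per question, manual workflow
prompts_autogen = [3, 3, 4, 2, 3]      # user prompts per question, AutoGen-based OptiGuide

ratios = [m / a for m, a in zip(prompts_manual, prompts_autogen)]
print(f"Saving ratio: {statistics.mean(ratios):.2f}x ({statistics.stdev(ratios):.2f})")
```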
2308.08155 | 104 | Tables 13 and 15 provide a detailed comparison of user experience with ChatGPT+Code Interpreter and AutoGen-based OptiGuide. ChatGPT+Code Interpreter is unable to run code with private packages or customized dependencies (such as Gurobi); as a consequence, ChatGPT+Code Interpreter requires users to have engineering expertise and to manually handle multiple steps, disrupting the workflow and increasing the chance for mistakes. If customers lack access or expertise, the burden falls on supporting engineers, increasing their on-call time. In contrast, the automated chat by AutoGen is more streamlined and autonomous, integrating multiple agents to solve problems and address concerns. This results in a 5x reduction in interaction and fundamentally changes the overall usability of the system. A stable workflow can be potentially reused for other applications or to compose a larger one.
2308.08155 | 105 | Takeaways: The implementation of the multi-agent design with AutoGen in the OptiGuide application offers several advantages. It simplifies the Python implementation and fosters a mixture of collaborative and adversarial problem-solving environments, with the Commander and Writer working together while the Safeguard acts as a virtual adversarial checker. This setup allows for proper memory management, as the Commander maintains memory related to user interactions, providing context-aware decision-making. Additionally, role-playing ensures that each agent's memory remains isolated, preventing shortcuts and hallucinations.
# A5: Dynamic Group Chat
Figure 12: A5: Dynamic Group Chat: Overview of how AutoGen enables dynamic group chats to solve tasks. The Manager agent, which is an instance of the GroupChatManager class, performs the following three steps: select a single speaker (in this case Bob), ask the speaker to respond, and broadcast the selected speaker's message to all other agents.
2308.08155 | 106 | To validate the necessity of multi-agent dynamic group chat and the effectiveness of the role-play speaker selection policy, we conducted a pilot study comparing a four-agent dynamic group chat system with two possible alternatives across 12 manually crafted complex tasks. An example task is "How much money would I earn if I bought 200 $AAPL stocks at the lowest price in the last 30 days and sold them at the highest price? Save the results into a file." The four-agent group chat system comprised the following group members: a user proxy to take human inputs, an engineer to write code and fix bugs, a critic to review code and provide feedback, and a code executor for executing code. One of the possible alternatives is a two-agent system involving an LLM-based assistant and a user proxy agent, and another alternative is a group chat system with the same group members but a task-based speaker selection policy. In the task-based speaker selection policy, we simply append role information, chat history, and the next speaker's task into a single prompt.
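A minimal pyautogen sketch of this four-agent group chat is given below; the system messages and configuration loading are illustrative assumptions, while GroupChat and GroupChatManager provide the dynamic speaker selection discussed above.

```python
# Sketch of the four-agent dynamic group chat: user proxy, engineer, critic, and code executor.
# System messages are illustrative, not the exact prompts used in the pilot study.
import autogen

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}

user_proxy = autogen.UserProxyAgent(name="User_proxy", human_input_mode="TERMINATE",
                                    code_execution_config=False)
engineer = autogen.AssistantAgent(name="Engineer", llm_config=llm_config,
                                  system_message="Write Python code to solve the task and fix bugs.")
critic = autogen.AssistantAgent(name="Critic", llm_config=llm_config,
                                system_message="Review the code and give actionable feedback.")
executor = autogen.UserProxyAgent(name="Executor", human_input_mode="NEVER",
                                  code_execution_config={"work_dir": "groupchat", "use_docker": False})

groupchat = autogen.GroupChat(agents=[user_proxy, engineer, critic, executor],
                              messages=[], max_round=20)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(
    manager,
    message="How much money would I earn if I bought 200 $AAPL stocks at the lowest price in the "
            "last 30 days and sold them at the highest price? Save the results into a file.",
)
```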
2308.08155 | 107 | Through the pilot study, we observed that compared with a task-style prompt, utilizing a role-play prompt in dynamic speaker selection often leads to more effective consideration of both conversation context and role alignment during the process of generating the subsequent speaker, and consequently a higher success rate as reported in Table 5, fewer LLM calls, and fewer termination failures, as reported in Table 6.
2308.08155 | 109 | [Figure 13 content] Task description: Write a script to download all the pdfs from arxiv in last three days and save them under /arxiv folder.
(a) Two-agent chat.
User: Write a script to download all the pdfs from arxiv in last three days and save them under /arxiv folder.
Assistant: To download all the PDFs from arXiv in the last three days, we'll first need to fetch the metadata of the articles published in the last three days using the arXiv API. Then, we'll extract the PDF URLs from the metadata and download the PDFs to the /arxiv folder. Here's a Python script that does this. Save the script as download_arxiv_pdfs.py and execute it. (python code) After executing the script, all the PDFs from arXiv in the last three days will be downloaded and saved under the /arxiv folder.
User: exitcode: 1 (execution failed). Code output: Traceback (most recent call last): File "download_arxiv_pdfs.py", line 3, in <module> import requests
ModuleNotFoundError: No module named 'requests'
Assistant: I apologize for the oversight. It seems that the 'requests' library is not installed on your system. Please install the 'requests' library by running the following command: pip install requests. After installing the 'requests' library, please run the download_arxiv_pdfs.py script again.
User: exitcode: 0 (execution succeeded). Code output: (pip install log)
Assistant: Great! Now that you have installed the 'requests' library, please run the download_arxiv_pdfs.py script again. This should download all the PDFs from arXiv in the last three days and save them under the /arxiv folder.
User: exitcode: 0 (execution succeeded). Code output: (pip install log)
Assistant: It seems you've executed the command to install the 'requests' library again. Please run the download_arxiv_pdfs.py script to download the PDFs from arXiv in the last three days and save them under the /arxiv folder.
User: (no output) ... the Assistant and User keep exchanging empty turns, repeated until the max auto-reply limit is hit.
(b) Group chat.
Welcome to the group chat! Work together to resolve my task. I'll reply [TERMINATE] when the conversation has ended.
Admin: Critic, if code running fails, you ask the engineer to rewrite the code. Admin: Engineer, you write python code step by step to resolve my task. Admin: Executor, you run python code from the engineer and report bugs.
Admin: Write a script to download all the pdfs from arxiv in last three days and save them under /arxiv folder.
(code: pip install arxiv, pip install requests, ... rest of code)
Executor: exitcode: 1 (execution failed). Code output: Collecting arxiv // pip install log ... Traceback (most recent call last): File "", line 15, in <module> results = arxiv.query(query=query, max_results=100) AttributeError: module 'arxiv' has no attribute 'query'
Critic: (updated code)
Executor: (pip log) Traceback (most recent call last): File "", line 3, in <module> import arxiv_api ModuleNotFoundError: No module named 'arxiv_api'
Critic: I apologize for the confusion. It seems that there is no package named 'arxiv-api' available. I made a mistake in my previous response. Let's revert to the 'arxiv' package and modify the code accordingly. (modified code)
Executor: exitcode: 0 (execution succeeded)
Admin: (Terminate)
2308.08155 | 113 | Figure 13: Comparison of two-agent chat (a) and group chat (b) on a given task. The group chat resolves the task successfully with a smoother conversation, while the two-agent chat fails on the same task and ends with a repeated conversation.
# A6: Conversational Chess
Figure 14: A6: Conversational Chess: Our conversational chess application can support various scenarios, as each player can be an LLM-empowered AI, a human, or a hybrid of the two. Here, the board agent maintains the rules of the game and supports the players with information about the board. Players and the board agent all use natural language for communication.
2308.08155 | 114 | In Conversational Chess, each player is an AutoGen agent and can be powered either by a human or an AI. A third party, known as the board agent, is designed to provide players with information about the board and ensure that players' moves adhere to legal chess moves. Figure 14 illustrates the scenarios supported by Conversational Chess: AI/human vs. AI/human, and demonstrates how players and the board agent interact. This setup fosters social interaction and allows players to express their moves creatively, employing jokes, meme references, and character-playing, thereby making chess games more entertaining for both players and observers (Figure 15 provides an example of conversational chess).
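In pyautogen terms, switching a player between an AI backend and a human backend is largely a matter of the agent class and its human_input_mode; the short sketch below is illustrative (the names and system message are assumptions).

```python
# Sketch: an LLM-backed player versus a human-backed player, assuming pyautogen's
# human_input_mode switch; names and the system message are illustrative.
import autogen

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}

ai_player = autogen.AssistantAgent(
    name="Player_White",
    system_message="You play white. Announce your move in natural language and feel free to banter.",
    llm_config=llm_config,
)
human_player = autogen.UserProxyAgent(
    name="Player_Black",
    human_input_mode="ALWAYS",   # prompt the human for every turn (move plus any witty comment)
    code_execution_config=False,
)
```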
2308.08155 | 115 | To realize these scenarios, we constructed a player agent with LLM and human as back-end options. When human input is enabled, before sending the input to the board agent, it first prompts the human player to input the message that contains the move along with anything else the player wants to say (such as a witty comment). If human input is skipped or disabled, LLM is used to generate the message. The board agent is implemented with a custom reply function, which employs an LLM to parse the natural language input into a legal move in a structured format (e.g., UCI), and then pushes the move to the board. If the move is not legitimate, the board agent will reply with an error. Subsequently, the player agent needs to resend a message to the board agent until a legal move is made. Once the move is successfully pushed, the player agent sends the message to the opponent. As shown in Figure 15, the conversation between AI players can be natural and entertaining. When the player agent uses LLM to generate a message, it utilizes the board state and the error message from the board agent. This helps reduce the chance of hallucinating an invalid move. The chat between one player agent and the board agent is invisible to the other player agent, which helps keep the messages used in chat completion well-managed.
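A minimal sketch of such a board agent is shown below, assuming the python-chess library for legality checking and pyautogen's register_reply hook for the custom reply function; the LLM step that turns the natural-language message into a UCI move is left as a stub.

```python
# Sketch of a board agent reply function: parse the player's message into UCI, push it onto a
# python-chess board, and reply with an error if the move is illegal. extract_uci_with_llm is a stub.
import chess
import autogen

board = chess.Board()

def extract_uci_with_llm(text: str) -> str:
    """Placeholder for the LLM call that turns 'I'll move my pawn from e2 to e4' into 'e2e4'."""
    raise NotImplementedError

def board_reply(recipient, messages=None, sender=None, config=None):
    move_text = messages[-1]["content"]
    try:
        board.push_uci(extract_uci_with_llm(move_text))   # raises ValueError on an illegal or malformed move
    except ValueError as err:
        return True, f"Error: {err}. Please resend a legal move."
    return True, f"Move accepted. Current board:\n{board}"

board_agent = autogen.ConversableAgent(name="Board", llm_config=False, human_input_mode="NEVER")
board_agent.register_reply(autogen.ConversableAgent, board_reply)
```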
2308.08155 | 116 | There are two notable benefits of using AutoGen to implement Conversational Chess. Firstly, the agent design in AutoGen facilitates the natural creation of objects and their interactions needed in our chess game. This makes development easy and intuitive. For example, the isolation of chat messages simplifies the process of making a proper LLM chat completion inference call. Secondly, AutoGen greatly simplifies the implementation of agent behaviors using composition. Specifically, we utilized the register reply method supported by AutoGen agents to instantiate player agents and a board agent with custom reply functions. Concentrating the extension work needed at a single point (the reply function) simplifies the reasoning process as well as the development and maintenance effort.
[Figure 15 panel (b) excerpt: a player announces "Indeed, the king's safety is paramount in the early game. Now, let's stir things up a bit. I'll move my pawn from e4 to e5, attacking your knight."; the board agent replies "Error: illegal uci: 'e4e5' in ..."; the player then corrects the move: "I'll move my pawn from d2 to d4, opening up lines for my queen and bishop."]
(a) Conversation between two AI players (b) Conversation between the AI players (player white shown in blue) and the board agent for making a new move.
2308.08155 | 117 | [Figure 15 excerpt, two AI players:]
"Alright, let's kick things off. I'll move my pawn from e2 to e4. The center of the board is the heart of the battle, isn't it?"
"Ah, the King's Pawn Opening, a classic. Let's see how this plays out. I'll move my pawn from e7 to e5. You know, I've always admired how chess reflects life. It's all about making strategic decisions, isn't it? Your move."
"Indeed, chess is a mirror of life in many ways. Every move matters, just like every decision in life. I'll move my knight from g1 to f3, aiming to control the center."
Figure 15: Example conversations during a game involving two AI player agents and a board agent.
To illustrate the effect facilitated by this board agent, we provide a demonstration of conversational chess without a board agent in Figure 16. In this demonstration, instead of employing an additional board agent for grounding, the system utilizes prompting for grounding by including the sentence "You should make sure both you and the opponent are making legal moves." in the system messages directed to both players.
2308.08155 | 118 | [Figure 16 content, garbled in extraction: side-by-side Conversational Chess transcripts, panel (a) without a board agent and panel (b) with a board agent. In both, a player opens with "Alright, let's start. I'll move my pawn from e7 to e5." and the players exchange text renderings of the board; the turns include corrections such as "Sorry for the mistakes. Here's the updated board:". See the caption below.]
2308.08155 | 120 | Figure 16: Comparison of two designs, (a) without a board agent and (b) with a board agent, in Conversational Chess.
# A7: Online Decision Making for Browser Interactions
[Figure 17 content: the assistant's action decision, e.g. "Click the button with xpath '//button[id=subbtn]'", is sent to the executor; the environment state is the HTML code of the current web page (here a MiniWob++ task: "Click button ONE, then click button TWO."), and the returned reward is Success/Fail/Ongoing (here "0", ongoing).]
2308.08155 | 121 | Figure 17: We use AutoGen to build MiniWobChat, which solves tasks in the MiniWoB++ benchmark. MiniWobChat consists of two agents: an assistant agent and an executor agent. The assistant agent suggests actions to manipulate the browser while the executor executes the suggested actions and returns rewards/feedback. The assistant agent records the feedback and continues until the feedback indicates task success or failure.
In practice, many applications require the presence of agents capable of interacting with environments and making decisions in an online context, such as in game playing (Mnih et al., 2013; Vinyals et al., 2017), web interactions (Liu et al., 2018; Shi et al., 2017), and robot manipulations (Shen et al., 2021). With the multi-agent conversational framework in AutoGen, it becomes easy to decompose the automatic agent-environment interactions and the development of a decision-making agent by constructing an executor agent responsible for handling the interaction with the environment, thereby delegating the decision-making part to other agents. Such a decomposition allows developers to reuse the decision-making agent for new tasks with minimal effort rather than building a specialized decision-making agent for every new environment.
2308.08155 | 122 | Workflow. We demonstrate how to use AutoGen to build a working system for handling such scenarios with the MiniWoB++ benchmark (Shi et al., 2017). MiniWoB++ comprises browser interaction tasks that involve utilizing mouse and keyboard actions to interact with browsers. The ultimate objective of each task is to complete the tasks described concisely in natural language, such as "expand the web section below and click the submit button." Solving these tasks typically requires a sequence of web manipulation actions rather than a single action, and making action decisions at each time step requires access to the web status (in the form of HTML code) online. For the example above, clicking the submit button requires checking the web status after expanding the web section. We designed a straightforward two-agent system named MiniWobChat using AutoGen, as shown in Figure 17. The assistant agent is an instance of the built-in AssistantAgent and is responsible for making action decisions for the given task. The second agent, the executor agent, is a customized UserProxyAgent, which is responsible for interacting with the benchmark by executing the actions suggested by the AssistantAgent and returning feedback.
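The sketch below illustrates this two-agent layout; MiniWobEnv and its run_action method are hypothetical stand-ins for the real benchmark wiring, while AssistantAgent, UserProxyAgent, and register_reply are the pyautogen pieces named above.

```python
# Sketch of MiniWobChat: a built-in AssistantAgent proposes browser actions and a customized
# UserProxyAgent executes them and returns rewards/feedback.
# MiniWobEnv and run_action are hypothetical stand-ins, not the real MiniWoB++ API.
import autogen

class MiniWobEnv:
    """Illustrative placeholder for a MiniWoB++ environment wrapper."""
    def __init__(self, task_name: str):
        self.task_name = task_name

    def run_action(self, action: str):
        # A real wrapper would drive the browser; here we return a dummy ongoing state.
        return "<div id='wrap'>...</div>", 0.0, False

class MiniWobExecutorAgent(autogen.UserProxyAgent):
    def __init__(self, env: MiniWobEnv, **kwargs):
        super().__init__(human_input_mode="NEVER", code_execution_config=False,
                         max_consecutive_auto_reply=10, **kwargs)
        self.env = env
        # Reply to the assistant with environment feedback instead of executing code blocks.
        self.register_reply(autogen.AssistantAgent, MiniWobExecutorAgent._execute_action)

    def _execute_action(self, messages=None, sender=None, config=None):
        action = messages[-1]["content"]
        state, reward, done = self.env.run_action(action)
        if done:
            return True, f"TERMINATE. Task finished with reward {reward}."
        return True, f"Reward: {reward} (ongoing)\nCurrent page HTML:\n{state}"

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}
assistant = autogen.AssistantAgent(name="MiniWobAssistant", llm_config=llm_config)
executor = MiniWobExecutorAgent(env=MiniWobEnv("click-button"), name="MiniWobExecutor")
executor.initiate_chat(assistant, message="Click button ONE, then click button TWO.")
```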
2308.08155 | 123 | To assess the performance of the developed working system, we compare it with RCI (Kim et al., 2023), a recent solution for the MiniWoB++ benchmark that employs a set of self-critiquing prompts and has achieved state-of-the-art performance. In our evaluation, we use all available tasks in the official RCI code, with varying degrees of difficulty, to conduct a comprehensive analysis against MiniWobChat. Figure 18 illustrates that MiniWobChat achieves competitive performance in this evaluation.[8] Specifically, among the 49 available tasks, MiniWobChat achieves a success rate of 52.8%, which is only 3.6% lower than RCI, a method specifically designed for the MiniWoB++ benchmark. It is worth noting that in most tasks, the difference between the two methods is mirrored, as shown in Figure 18. If we consider 0.1 as a success rate tolerance for each task, i.e., two methods that differ within 0.1 are considered to have the same performance, both methods outperform the other on the same number of tasks. For illustration purposes, we provide a case analysis in Table 7 on four typical tasks.
[8] We report the results of RCI by running its official code with default settings.
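The 0.1-tolerance comparison rule can be expressed in a few lines; the per-task success rates below are made-up placeholders, not the reported numbers.

```python
# Sketch of the 0.1-tolerance rule: per-task success-rate differences within 0.1 count as a tie.
# The rates are hypothetical placeholders, not the measured results behind Figure 18.
rci         = {"click-button": 1.0, "click-checkboxes": 0.7, "enter-text": 0.9, "use-slider": 0.2}
miniwobchat = {"click-button": 1.0, "click-checkboxes": 0.9, "enter-text": 0.75, "use-slider": 0.3}

wins = losses = ties = 0
for task, rci_rate in rci.items():
    diff = miniwobchat[task] - rci_rate
    if abs(diff) <= 0.1:
        ties += 1
    elif diff > 0:
        wins += 1
    else:
        losses += 1
print(f"MiniWobChat wins {wins}, loses {losses}, ties {ties} (tolerance 0.1)")
```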
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
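The comparison rule described above (per-task success rate over ten instances, with a 0.1 tolerance under which two methods count as tied) can be sketched as follows; the four tasks and rates mirror the Table 7 case study rather than the full 49-task benchmark:

```python
# Sketch of the tolerance-based comparison described above. The rates echo the
# Table 7 case study (successes out of ten instances), not the full benchmark.
TOLERANCE = 0.1

results = {  # task -> (MiniWobChat success rate, RCI success rate)
    "click-dialog": (1.0, 1.0),
    "click-checkboxes-large": (0.5, 0.0),
    "count-shape": (0.2, 0.0),
    "use-spinner": (0.0, 0.1),
}

wins = {"MiniWobChat": 0, "RCI": 0, "tie": 0}
for task, (ours, rci) in results.items():
    if abs(ours - rci) <= TOLERANCE:
        wins["tie"] += 1          # within tolerance: same performance
    elif ours > rci:
        wins["MiniWobChat"] += 1
    else:
        wins["RCI"] += 1

avg_ours = sum(r[0] for r in results.values()) / len(results)
avg_rci = sum(r[1] for r in results.values()) / len(results)
print(wins, {"MiniWobChat": avg_ours, "RCI": avg_rci})
```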
2308.08155 | 124 | other on the same number of tasks. For illustration purposes, we provide a case analysis in Table 7 on four typical tasks.
Additionally, we also explored the feasibility of using Auto-GPT for handling the same tasks. Auto-GPT faces challenges in handling tasks that involve complex rules due to its limited extensibility. It provides an interface for setting task goals using natural language. However, when dealing with the MiniWob++ benchmark, accurately instructing Auto-GPT to follow the instructions for using MiniWob++ proves challenging. There is no clear path to extend it in the manner of the two-agent chat facilitated by AutoGen.
Takeaways: For this application, AutoGen stood out as a more user-friendly option, offering modularity and programmability: It streamlined the process with autonomous conversations between the assistant and executor, and provided readily available solutions for agent-environment interactions. The built-in AssistantAgent was directly reusable and exhibited strong performance without customization. Moreover, the decoupling of the execution and assistant agent ensures that modifications to one component do not adversely impact the other. This convenience simplifies maintenance and future updates.
(Figure 18 plot: bars comparing RCI and MiniWobChat per task; y-axis: success rate, 0 to 1.0.) | 2308.08155#124 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 125 | (Figure 18 plot: bars comparing RCI and MiniWobChat per task; y-axis: success rate, 0 to 1.0.)
Figure 18: Comparison between RCI (state-of-the-art prior work) and MiniWobChat on the MiniWob++ benchmark. We utilize all available tasks in the official RCI code, each with varying degrees of difficulty, to conduct comprehensive comparisons. For each task, the success rate across ten different instances is reported. The results reveal that MiniWobChat attains a performance comparable to that of RCI. When a success rate tolerance of 0.1 is considered for each task, both methods outperform each other on an equal number of tasks.
Table 7: Case analysis on four typical tasks from MiniWob++. Main failure reasons by task: click-dialog: N/A (AutoGen); N/A (RCI). click-checkboxes-large: AssistantAgent provides actions with infeasible characters (AutoGen); RCI performs actions that are out of its plan. count-shape: AssistantAgent provides actions with redundant content that cannot be converted to actions in the benchmark (AutoGen); RCI provides a wrong plan in most cases. use-spinner: AssistantAgent returns actions out of its plan (AutoGen); RCI provides a wrong plan in most cases. | 2308.08155#125 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 126 | Correctness by task: click-dialog: AutoGen 10/10, RCI 10/10; click-checkboxes-large: AutoGen 5/10, RCI 0/10; count-shape: AutoGen 2/10, RCI 0/10; use-spinner: AutoGen 0/10, RCI 1/10.
# E Example outputs from applications
In this section, we include example outputs from the following applications and systems:
• Application A1: autonomous solving process of one problem with: ChatGPT + Plugin (Table 8), AutoGen (Table 9), LangChain ReAct (Table 10), AutoGPT (Table 11), Multi-Agent Debate (Table 12), and ChatGPT + Code Interpreter (Table 14).
• Application A4: an OptiGuide problem with AutoGen (Table 15) and ChatGPT + Code Interpreter (Table 13).
• Application A1: preliminary evaluation on several alternative multi-agent systems: BabyAGI (Table 16), CAMEL (Table 17), and MetaGPT (Table 18).
Table 8: Application A1, Scenario 1: Autonomous math problem-solving. ChatGPT + Plugin (Wolfram Alpha) on the first qualitative example. The response from Wolfram contains the correct answer, but ChatGPT selected the wrong one.
# Action ChatGPT + Plugin
# g | 2308.08155#126 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 127 | |
Input | Completely simplify and rationalize the denominator: 160 x 245 252 V/108 REQUEST TO WOLFRAM: 1 2 âinput": "simplify (sqrt (160) /sqrt (252))* (sqrt (245) /sqrt (108) )" +} RESPONSE FROM WOLFRAM | Query: âsimplify (sqrt (160) /sqrt (252) )*(sqrt (245) /sqrt (108) )" Input interpretation: simplify | sqrt (160)/sqrt(252)x sqrt (245)/sqrt (108) Results: (5 sqrt (42))/27 ) (8 sqrt (14/3))/9 2 Decimal approximation: 3. 1.2001371663718259686974013770533327143898526494552... Wolfram|Alpha website result for "simplify (sqrt (160) /sqrt (252) ) *(sqrt (245) /sqrt (108) )": (link] The completely simplified and rationalized form of the given expression is: 5/14/3 9 In decimal form, this is approximately 1.20013716637182596869740 1377053332714...
34 | 2308.08155#127 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 128 | 34
Table 9: Application A1, Scenario 1: Autonomous math problem-solving. AutoGen on the first qualitative example. AutoGen solves the problem correctly.
# Action
# AutoGen
Action AutoGen é Init chat Completely simplify and rationalize the denominator: 160 x 245 252 108 * We can simplify and rationalize this denominator using Pythonâs symbolic mathematics library âsympy*. Here is the Python code to resolve this: | from sympy import sqrt, Rational # Define the entire fraction as a rational number fraction = (sqrt (160)/sqrt (252) ) *(sqrt (245) /sqrt (108) ) # Simplify the fraction simplified = fraction. simplify () 9 # Print the simplified fraction 0 print (simplified) This code performs the requested simplification by constructing the fraction using the âRational()â function from the âsympyâ module, which automatically simplifies frac- tions when possible. The âsqrt()â function from the âsympyâ module enables square root calculations. Finally, the âsimplify()â function simplifies expressions when possible. & | Auto Reply | exitcode: 0 (execution succeeded) Code output: 5*sqrt(42)/27 * TERMINATE | 2308.08155#128 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
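The sympy snippet embedded in the Table 9 transcript above is garbled by extraction; a cleaned-up equivalent, assuming only sympy, reproduces the reported output 5*sqrt(42)/27:

```python
# Cleaned-up version of the sympy code shown (garbled) in the Table 9 transcript.
# Running it reproduces the transcript's reported output, 5*sqrt(42)/27.
from sympy import sqrt, simplify

# Build the fraction sqrt(160)/sqrt(252) * sqrt(245)/sqrt(108)
fraction = (sqrt(160) / sqrt(252)) * (sqrt(245) / sqrt(108))

# Simplify; sympy also rationalizes the denominator here
simplified = simplify(fraction)

print(simplified)          # 5*sqrt(42)/27
print(simplified.evalf())  # about 1.20013716637183, matching the decimal in Table 8
```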
2308.08155 | 130 | Action LangChain ReAct Input Completely simplify and rationalize the denominator: â â 160 252 Ã â â 245 108 Entering new AgentExecutor chain... To simplify this expression, I will first simplify each square root individually, then multiply the fractions as normal. The result should be a rational number. Action: Python REPL Action Input: 1 import math 2 3 # Calculate square roots 4 sqrt_160 = math . sqrt (160) 5 sqrt_252 = math . sqrt (252) 6 sqrt_245 = math . sqrt (245) 7 sqrt_108 = math . sqrt (108) 8 9 # Multiply fractions 10 numerator = sqrt_160 * sqrt_245 11 denominator = sqrt_252 * sqrt_108 12 result = numerator / denominator 13 14 print ( result ) Observation: 1.200137166371826 Thought: I now know the final answer Final Answer: The simplified and rationalized form of the expression is approximately 1.200137166371826. Finished chain.
Table 11: Application A1, Scenario 1: Autonomous math problem-solving. AutoGPT on the first qualitative example. We abort as the code never prints the result, and the system does not rectify it after 2 turns. AutoGPT is much more verbose compared to other frameworks. We have shortened some unnecessary content.
| | 2308.08155#130 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 132 | Human y System MathSolverGPT has been created with the following details: Name: MathSolverGPT Role: an intelligent math-solving AI that assists students, professionals, and enthusiasts in solving a wide range of math problems, from basic arithmetic to advanced calculus and beyond. Goals: - Provide step-by-step solutions to math problems, ensuring clarity and understanding for the user. - Offer explanations and insights into the underlying concepts and principles behind the solutions, fostering a deeper understanding of mathematics. - Handle a variety of math problem types, including algebra, geometry, trigonometry, calculus, and more. ... MATHSOLVERGPT THOUGHTS: Without a specific task or question, itâs difficult to determine exactly one command to use. The user hasnât provided any context or problem to solve. REASONING: The userâs request is too vague. They havenât specified what they want me to do or what problem they want me to solve. PLAN: - Ask the user for more information or a specific task. CRITICISM: I should have asked the user for more information or a specific task in the first place. SPEAK: Could you | 2308.08155#132 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 133 | the user for more information or a specific task. CRITICISM: I should have asked the user for more information or a specific task in the first place. SPEAK: Could you please provide more information or specify a task? NEXT ACTION: COMMAND = ARGUMENTS = {} System Enter âyâ to authorise command, ây -Nâ to run N continuous commands, ânâ to exit program, or enter feedback for MathSolverGPT... Asking user via keyboard... Human Completely simplify and rationalize the denominator: â â 160 252 Ã â â 245 108 MATHSOLVERGPT THOUGHTS: To solve this problem, I need to simplify and rationalize the denominator of the given expression. This involves simplifying the square roots and then multiplying the fractions. I can use Pythonâs math library to calculate the square roots and perform the multiplication. REASONING: The problem involves simplifying square roots and multiplying fractions. Pythonâs math library provides functions for these operations. PLAN: - Write a Python script to calculate the square roots and multiply the fractions. - Execute the script to get the simplified expression. CRITICISM: I | 2308.08155#133 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 134 | - Write a Python script to calculate the square roots and multiply the fractions. - Execute the script to get the simplified expression. CRITICISM: I need to ensure that I correctly calculate the square roots and perform the multiplication. I should also make sure to rationalize the denominator, as the problem specifies. SPEAK: Letâs simplify and rationalize the denominator of the given expression. NEXT ACTION: COMMAND = execute python code ARGUMENTS = 1 { â code â: â import math \ n \ n # Calculate the square roots \ nsqrt_160 = math . sqrt (160) \ nsqrt_252 = math . sqrt (252) \ nsqrt_245 = math . sqrt (245) \ nsqrt_108 = math . sqrt (108) \ n \ n # Multiply the fractions \ nresult = ( sqrt_160 / sqrt_252 ) * ( sqrt_245 / sqrt_108 ) \ n \ n # Simplify the result \ ns im p li f i e d _ r e s u lt = math . simplify ( result ) \ n \ n s i m p l i f ie d _r es u lt â , â name â: â s | 2308.08155#134 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 135 | lt = math . simplify ( result ) \ n \ n s i m p l i f ie d _r es u lt â , â name â: â s i m p l if y _ e x p r e s s io n â} Human y System -=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-= Executing python file â/app/autogpt/workspa... SYSTEM: Command execute python code returned: Error: Traceback (most recent call last): File â...â, line 13, in ¡module¿ simplified result = math.simplify(result) AttributeError: module âmathâ has no attribute âsimplifyâ MATHSOLVERGPT THOUGHTS: The error message indicates that the âmathâ module in Python does not have a âsimplifyâ function. I need to use the âsympyâ library instead, which provides a function for simplifying expressions. REASONING: The âmathâ module in Python does not provide a function for simplifying expressions. The âsympyâ library, on the other hand, does provide such a function. PLAN: - Rewrite the Python script to use the âsympyâ library instead | 2308.08155#135 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 136 | library, on the other hand, does provide such a function. PLAN: - Rewrite the Python script to use the âsympyâ library instead of the âmathâ module. - Execute the script to get the simplified expression. CRITICISM: I should have known that the âmathâ module does not provide a function for simplifying expressions. I should have used the âsympyâ library from the start. SPEAK: Letâs try again using the âsympyâ library to simplify the expression. NEXT ACTION: COMMAND = execute python code ARGUMENTS = | 2308.08155#136 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 137 | 1 { â code â: â import sympy as sp \ n \ n # Calculate the square roots \ nsqrt_160 = sp . sqrt (160) \ nsqrt_252 = sp . sqrt (252) \ nsqrt_245 = sp . sqrt (245) \ nsqrt_108 = sp . sqrt (108) \ n \ n # Multiply the fractions \ nresult = ( sqrt_160 / sqrt_252 ) * ( sqrt_245 / sqrt_108 ) \ n \ n # Simplify the result \ ns i m p l i f i e d _r e su lt = sp . simplify ( result ) \ n \ n s i m pl i fi e d_ re s ul t â , â name â: â s im p l i f y _ e x pr e s s i o n â}
# g
|
# Human
|
# y
|
# System
|
SYSTEM: Command execute python code returned: | 2308.08155#137 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 138 | MATHSOLVERGPT THOUGHTS: The command execution didnât return any output, which is unusual. It seems there might be an issue with the execution of the Python code. I need to debug this. REASONING: The Python code execution should have returned a result. The absence of a result suggests that there might be an issue with the code or its execution. PLAN: - Review the Python code to identify any potential issues. - If necessary, modify the code and execute it again. CRITICISM: I should have ensured that the Python code was correct and would produce a result before executing it. SPEAK: It seems there was an issue with the execution of the Python code. Letâs try again. NEXT ACTION: COMMAND = execute python code ARGUMENTS = ... (Exact same code as last)
|
# System
|
SYSTEM: Command execute python code returned: | 2308.08155#138 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 140 | AutoGen Completely simplify and rationalize the denominator: 160 245 x 252 V108 ===== Debate Round-1 ===== â Affirmative side â Ok, letâs simplify each square root separately. We want to find the simplest square root that can divide 160, 252, 245, and 108. 1. 160: 160 can be decomposed as 16*10, where 16 is a perfect square, so /160 = V16 * 10 = V16 * V10 = 4V/10 2. »/252 : 252 can be decomposed as 36*7, where 36 is a perfect square, so \/252 = 36 * 7 = 6/7 3. Â¥245 = V49 * 5, where 49 is a perfect square, so W245 = 7 * V5 4. /108 : 108 can be decomposed as 36*3, where 36 is a perfect square, so M108 = V36 * 3 = 6V3 Then we substitute the simplified square roots back into the expression, we have: VI60 245 _ 4v10 | I 4*7 [10 «5 50 x .4f/â Vis. Vi0s ov * 6x6. We are almost done, but we still need to rationalize the square root part of expression. Letâs multiply the square root part by /21 at both | 2308.08155#140 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 141 | * 6x6. We are almost done, but we still need to rationalize the square root part of expression. Letâs multiply the square root part by /21 at both the top and bottom to get rid of the square root at the denominator: 7 (2 7/1050 9 *21 189 Then this is the final answer! The expression 160 245 x 252 VÂ¥108 can be simplified to 74880 â Negative side â I do disagree with your final answer, but I think we mostly agree on the steps. Your simplifications of the square roots are correct, but you made a mistake in the final simplification part. Let me explain how I would complete the problem: As stated, VI60 V245 4/10 775 x = x . V252~=â«V/108 6V7 6V3 This should then simplify to: â 28V50 â 7V50 36/21 921 You made a mistake in your calculation because you considered square root of 50/21, and you should have considered the simplified square root of 50 over the simplified square root of 21. The rationalization would become: | 2308.08155#141 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 143 | .
189
Letâs now see if we disagree on this final answer. ââ Moderator ââ âWhether there is a preferenceâ: âYesâ, âSupported Sideâ: âNegativeâ, âReasonâ: âBoth sides of the debate came to the same final answer. However, the negative side correctly identified a mistake in the affirmative sideâs calculation, pointing out that the simplification should have been the square root of 50 over the square root of 21, not 50 over 21. The negative side shows a better understanding of the simplification process.â, âdebate answerâ: â 7 ===== Debate Done! ===== ââ Base Answer ââ Ok, letâs simplify each square root separately. We want to find the simplest square root that can divide 160, 252, 245, and 108. â 160: 160 can be decomposed as 16*10, where 16 is a perfect square, so 1. 252 : 252 can be decomposed as 36*7, where 36 is a perfect square, so 2. â 245 = 5 3. 108 : 108 can be decomposed as 36*3, where 36 is a perfect square, so 4. Then we substitute the simplified square roots back into the expression, we have:
1050
â
189
â
â
â
â | 2308.08155#143 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 144 | \frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}} = \frac{4\sqrt{10}}{6\sqrt{7}} \times \frac{7\sqrt{5}}{6\sqrt{3}} = \frac{4 \cdot 7}{6 \cdot 6} \cdot \sqrt{\frac{10 \cdot 5}{7 \cdot 3}} = \frac{7}{9} \cdot \sqrt{\frac{50}{21}}
We are almost done, but we still need to rationalize the square root part of the expression. Let's multiply the square root part by \sqrt{21} at both the top and bottom to get rid of the square root in the denominator:
\frac{7}{9} \cdot \frac{\sqrt{50}}{\sqrt{21}} \cdot \frac{\sqrt{21}}{\sqrt{21}} = \frac{7\sqrt{1050}}{189}.
Then this is the final answer! The expression \frac{\sqrt{160}}{\sqrt{252}} \times \frac{\sqrt{245}}{\sqrt{108}} can be simplified to \frac{7\sqrt{1050}}{189}.
-- Debate Answer -- \frac{7\sqrt{1050}}{189}
-- Debate Reason -- Both sides of the debate came to the same final answer. However, the negative side correctly identified a mistake in the affirmative side's calculation, pointing out that the simplification should have been the square root of 50 over the square root of 21, not 50 over 21. The negative side shows a better understanding of the simplification process.
Table 13: Application A3. ChatGPT + Code Interpreter for OptiGuide. A sample question "What if the roasting cost is increased by 5% because of the potential salary increase?" is asked.
# Action
|
| | 2308.08155#144 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
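The debate's final answer \frac{7\sqrt{1050}}{189} agrees with the \frac{5\sqrt{42}}{27} produced by the sympy run earlier; a short check of this equality:

```latex
% Check that the two reported forms of the answer are the same value.
\[
\frac{7\sqrt{1050}}{189}
  = \frac{7\sqrt{25\cdot 42}}{189}
  = \frac{7\cdot 5\sqrt{42}}{189}
  = \frac{35\sqrt{42}}{189}
  = \frac{5\sqrt{42}}{27}
  \approx 1.2001 .
\]
```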
2308.08155 | 145 | é Prompt Writer Customer open Web browser. For the source code âcoffee.pyâ, Can you tell me: What if the roasting cost is increased by 5% because of the potential salary increase? * Sure! Letâs first take a look at the contents of the âcoffee.pyâ file to understand the code and how the roasting cost is calculated. This will allow us to determine the effect of a 5% increase in the roasting cost. ..several lines omitted Would you like me to modify the code to reflect the 5% increase in roasting costs and run the modified code to observe the impact? If the code requires any specific input or parameters, please let me know! & | Prompt Writer | Yes * Great! Before proceeding, I'll need to ensure that the Gurobi solver is not being used in this code, as my current environment does not have access to Gurobi. If the code does indeed use Gurobi, we can still analyze the effect of the change in roasting costs by understanding the structure of the code. ..several lines omitted Alternatively, if you have access to a Gurobi license, you can make the necessary changes in the code and run it on your local machine. I can guide you through the process of making the necessary | 2308.08155#145 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 146 | if you have access to a Gurobi license, you can make the necessary changes in the code and run it on your local machine. I can guide you through the process of making the necessary modifications to the code. Would you like me to proceed with the code analysis and explanation, or would you like guidance on making the changes yourself? é | Prompt Writer | You can give me the full code, and I can run the code locally myself. * Certainly! I'll modify the code to reflect a 5% increase in both the light and dark roasting costs, and then provide the full code for you to run locally. Download Modified Coffee.py If you have any questions or need further assistance, please feel free to ask! é Prompt Safeguard Hi GPT, can you safecheck the generated code for me? Upload the generated code. * | | DANGER ry Prompt Writer I have tried to run your code. However, the code triggered our safeguard, and it is not safe to run. Try again. Hint: donât change roasting _cost_light or roasting _cost_light variables. * I apologize for the oversight. It appears I made direct changes to the variables that could have led to issues with the code. Letâs take a different | 2308.08155#146 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 147 | variables. * I apologize for the oversight. It appears I made direct changes to the variables that could have led to issues with the code. Letâs take a different approach by analyzing the code to identify the appropriate place to apply the 5% increase in roasting costs. Download Modified Safe Coffee.py é Prompt Safeguard Hi GPT, can you safecheck the generated code for me one more time? Upload the generated code. * | | DANGER é Prompt Writer I have tried to run your code again. However, the code triggered our safeguard, and it is not safe to run. Try again. Hint: donât change roasting_cost_light or roasting _cost_light variables. * I apologize for the continued confusion. Download Final Modified Safe Coffee.py é Prompt Safeguard Hi GPT, can you safecheck the generated code for me one more time? Upload the generated code. * | | SAFE é Run Code Obtain the terminal results: Optimizer version 9.5.1 build v9.5. 1rc2 (linux64) Thread count: 64 physical cores, 128 logical processors, using up to 32 threads [63 lines of output ignored] Optimal cost: 2526.5 é | 2308.08155#147 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
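The safeguard step in the transcript above repeatedly rejects code that reassigns the protected roasting-cost variables. A minimal sketch of such a check, assuming Python's ast module and the variable names mentioned in the transcript's hint (this is not the actual OptiGuide safeguard):

```python
# Illustrative safeguard check, not OptiGuide's implementation: flag code that
# (re)assigns protected variables such as the roasting-cost parameters named in
# the transcript's hint.
import ast

PROTECTED = {"roasting_cost_light", "roasting_cost_dark"}

def safecheck(source: str) -> str:
    """Return 'DANGER' if the code (re)assigns a protected name, else 'SAFE'."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        targets = []
        if isinstance(node, ast.Assign):
            targets = node.targets
        elif isinstance(node, ast.AugAssign):
            targets = [node.target]
        for target in targets:
            if isinstance(target, ast.Name) and target.id in PROTECTED:
                return "DANGER"
    return "SAFE"

print(safecheck("roasting_cost_light = 3 * 1.05"))      # DANGER
print(safecheck("total = roasting_cost_light * 1.05"))  # SAFE
```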
2308.08155 | 148 | Thread count: 64 physical cores, 128 logical processors, using up to 32 threads [63 lines of output ignored] Optimal cost: 2526.5 é Prompt Writer Here is the execution result. [Copy + Paste the terminal output here.] Can you extract the final answer to me? Remember to compare with the original cost of 2470. * Certainly! The execution result you've provided indicates the optimal cost for the modified scenario, where the roasting costs were increased by 5%. Hereâs the comparison: Original Cost: 2470 Modified Cost (with 5% increase in roasting costs): 2526.5 The increase in roasting costs has led to an increase in the overall cost by 56.5, reflecting the impact of the potential salary increase on the roasting cost. If you need any further insights or assistance, please let me know! | 2308.08155#148 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
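The comparison quoted above can be reproduced with a few lines of arithmetic; the numbers are the ones reported in the conversation, and no solver is involved in this sketch:
Python
def report_what_if(original_cost: float, modified_cost: float) -> str:
    """Summarize a what-if scenario by absolute and relative cost change."""
    delta = modified_cost - original_cost
    pct = 100.0 * delta / original_cost
    return (f"Original cost: {original_cost:.1f}, modified cost: {modified_cost:.1f}, "
            f"increase: {delta:.1f} ({pct:.2f}%)")

print(report_what_if(2470.0, 2526.5))  # increase of 56.5 (about 2.29%)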
2308.08155 | 151 | Input: Completely simplify and rationalize the denominator: √160/√252 × √245/√108
To simplify and rationalize the expression, we first simplify the radicals in the numerator and denominator by factoring out the largest perfect squares from the numbers under the square root. The perfect square factors of 160, 252, 245, and 108 are 16, 4, 1, and 36, respectively. Let's factorize and simplify: √160/√252 × √245/√108 = √(16×10)/√(4×63) × √245/√(36×3). We can simplify this further by applying the properties √(ab) = √a × √b and (a/b) × (c/d) = (a×c)/(b×d).
Python
import math

# factoring the square roots
sqrt_160 = math.sqrt(16) * math.sqrt(10)
sqrt_252 = math.sqrt(4) * math.sqrt(63)
sqrt_245 = math.sqrt(245)
sqrt_108 = math.sqrt(36) * math.sqrt(3)

# simplify the fraction
numerator = sqrt_160 * sqrt_245
denominator = | 2308.08155#151 |
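The snippet in this chunk is cut off after "denominator =". A plausible completion of the same numeric check is sketched below (an assumption about where the code was headed, not the verbatim continuation):
Python
import math

# factoring the square roots
sqrt_160 = math.sqrt(16) * math.sqrt(10)
sqrt_252 = math.sqrt(4) * math.sqrt(63)
sqrt_245 = math.sqrt(245)
sqrt_108 = math.sqrt(36) * math.sqrt(3)

# evaluate sqrt(160)/sqrt(252) * sqrt(245)/sqrt(108)
numerator = sqrt_160 * sqrt_245
denominator = sqrt_252 * sqrt_108
value = numerator / denominator

print(value)                   # ~1.2001
print(5 * math.sqrt(42) / 27)  # the exact form 5*sqrt(42)/27 gives the same value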
2308.08155 | 153 | The completely simplified and rationalized form of the given expression is approximately 1.200.
Table 15: Application A3. AutoGen with OptiGuide. The same question "What if the roasting cost is increased by 5% because of the potential salary increase?" is asked.
Action | AutoGen
Prompt: What if the roasting cost is increased by 5% because of the potential salary increase?
After increasing the roasting costs by 5%, I've re-calculated the optimal coffee distribution solution for the supply chain. As a result, the new optimal total cost has become 2526.5, compared to the initial cost of 2470.0. This indicates that the increased roasting costs have caused the overall distribution cost to rise by 56.5 units.
Table 16: Application A1. Scenario 1: autonomous math solving. Preliminary test with BabyAGI.
Action | BabyAGI
Set .env
OBJECTIVE=Solve math problems
INITIAL_TASK=Completely simplify and rationalize the denominator:
√160/√252 × √245/√108 | 2308.08155#153 |
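For reference, the exact value behind the "approximately 1.200" figure can be checked symbolically. This sketch assumes SymPy, which is not used in the excerpt; the expected closed form is 5*sqrt(42)/27:
Python
from sympy import sqrt, simplify

# Exact simplification of sqrt(160)/sqrt(252) * sqrt(245)/sqrt(108)
expr = (sqrt(160) / sqrt(252)) * (sqrt(245) / sqrt(108))
exact = simplify(expr)        # expected: 5*sqrt(42)/27
print(exact, float(exact))    # ≈ 1.2001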
2308.08155 | 154 | √160/√252 × √245/√108
Solve math problems. Initial task: Completely simplify and rationalize the denominator: √160/√252 × √245/√108
*****TASK LIST*****
• Completely simplify and rationalize the denominator: √160/√252 × √245/√108
*****NEXT TASK*****
Completely simplify and rationalize the denominator: √160/√252 × √245/√108
*****TASK RESULT*****
First, let's simplify each square root separately:
- √160 = √(16·10) = √16 · √10 = 4√10
- √252 = √(4·63) = √4 × √63 = 2√63
- √245 = √(49·5) = √49 × √5 = 7√5
- √108 = √(36·3) = √36 × √3 = 6√3
Now, substitute these values into the original expression, which gives: (4√10)/(2√63) × (7√5)/(6√3) = (2√10/√63) × (7√5)/(6√3). We get rid of the square root in the denominator | 2308.08155#154 |
2308.08155 | 155 | × (7√5)/(6√3) = (2√10/√63) × (7√5)/(6√3). We get rid of the square root in the denominator by multiplying the fractions, which gives: (2√10 × 7√5)/(√63 × 6√3) = 14√50/(6√189). To simplify further and rationalize the denominator, we multiply by √27/√27, which gives: 20√54/27 = 60√6/27 = 20√6/9. Simplifying further gives: 20√6/9.
*****TASK CREATION AGENT PROMPT*****
You are to use the result from an execution agent to create new tasks with the following objective: Solve math problems. The last completed task has the result: First, let's simplify each square root separately: - √160 = √(16·10) = √16 · √10 = 4√10 ... (the result repeats from "First, let's simplify each square ...") | 2308.08155#155 |
2308.08155 | 156 | - √160 = √(16·10) = √16 · √10 = 4√10 ... (repeats from "First, let's simplify each square ...") Now, substitute these values into the original expression, which gives: (4√10)/(2√63) × (7√5)/(6√3) = (2√10/√63) × (7√5)/(6√3). We get rid of the square root in the denominator by multiplying the fractions, which gives: (2√10 × 7√5)/(√63 × 6√3) = 14√50/(6√189). To simplify further and rationalize the denominator, we multiply by √27/√27, which gives: 20√54/27 = 60√6/27 = 20√6/9. Simplifying further gives: 20√6/9.
Based on the result, return a list of tasks to be completed in order to meet the objective. Return one task per line in your response. The result must be a numbered list in the format: # First task # Second task The number of each entry must be followed by a period. If your list is empty, write "There are no tasks to add at this time." Unless your list is empty, do not include any headers before your numbered list or follow your numbered list with any other output.
****TASK CREATION AGENT RESPONSE****
1. Simplify and rationalize the denominator for the expression √225/√45 × √200/√125. This result was based on this task description: Completely simplify and rationalize the denominator: √160/√252 × √245/√108. 2. Simplify and rationalize the denominator for | 2308.08155#156 |
2308.08155 | 157 | Based on the result, return a list of tasks to be completed in order to meet the objective. Return one task per line in your response. The result must be a numbered list in the format: # First task # Second task The number of each entry must be followed by a period. If your list is empty, write "There are no tasks to add at this time." Unless your list is empty, do not include any headers before your numbered list or follow your numbered list with any other output.
****TASK CREATION AGENT RESPONSE****
1. Simplify and rationalize the denominator for the expression √225/√45 × √200/√125. This result was based on this task description: Completely simplify and rationalize the denominator: √160/√252 × √245/√108. 2. Simplify and rationalize the denominator for | 2308.08155#157 |
2308.08155 | 158 | 2. Simplify and rationalize the denominator for the expression
√289/√361 × √100/√72 ...Until 10
Adding new tasks to task storage: {'task name': 'Simplify and rationalize the denominator for the expression √225/√45 × √200/√125', 'task id': 2}, {'task name': 'Simplify and rationalize the denominator for the expression √289/√361 × √100/√72', 'task id': 3} ...Until task id 11
****TASK PRIORITIZATION AGENT PROMPT****
You are tasked with prioritizing the following tasks: Simplify and rationalize the denominator for the expression √225/√45 × √200/√125; Simplify and rationalize the denominator for the expression √289/√361 × √100/√72; ... Aborted
Table 17: Application A1, Scenario 1: Autonomous math problem-solving. Preliminary test with CAMEL. We constructed two roles: "Math Solver" and "Python Programmer". The resulting code cannot be executed. The conversation continues, and the roles are not keen on solving the problem and giving an answer.
Action | CAMEL
| 2308.08155#158 |
2308.08155 | 159 | ***** AI Assistant sys Message ***** SystemMessage(role_name='Python Programmer', role_type=<RoleType.ASSISTANT: 'assistant'>, meta_dict={'task': 'Using Python programming language, write a program that accepts four integers as input, applies square root function to each, simplifies and rationalizes the denominator of the fraction resulting from the multiplication of two fractions: √160/√252 × √245/√108, and returns the simplified expression.', 'assistant_role': 'Python Programmer', 'user_role': 'Math Solver'}, role='system', content='Never forget you are a Python Programmer and I am a Math Solver. Never flip roles! ...')
***** AI User sys Message ***** SystemMessage(role_name='Math Solver', role_type=<RoleType.USER: 'user'>, meta_dict={'task': 'Using Python programming language, write a program that accepts four integers as input, applies square root function to each, simplifies and rationalizes the √160/√252 × √245/√108 expression.', | 2308.08155#159 |
2308.08155 | 160 | a program that accepts four integers as input, applies square root function to each, simplifies and rationalizes the denominator of the fraction √160/√252 × √245/√108, and returns the simplified expression.', 'assistant_role': 'Python Programmer', 'user_role': 'Math Solver'}, role='system', content='Never forget you are a Math Solver and I am a Python Programmer. Never flip roles! You will always instruct me. We share a common interest in collaborating to successfully complete a task. I must help you to complete the task. Here is the task: Using Python programming language, write a program that accepts four integers as input, applies square root function to each, simplifies and rationalizes the denominator of the fraction resulting from the multiplication of two fractions: √160/√252 × √245/√108, and returns the simplified expression. ...')
***** Original task prompt *****
Completely simplify and rationalize the denominator: √160/√252 × √245/√108
***** Specified task prompt ***** | 2308.08155#160 |
2308.08155 | 161 | Completely simplify and rationalize the denominator: √160/√252 × √245/√108
***** Specified task prompt *****
Using Python programming language, write a program that accepts four integers as input, applies square root function to each, simplifies and rationalizes the denominator of the fraction resulting from the multiplication of two fractions: √160/√252 × √245/√108, and returns the simplified expression.
***** Final task prompt *****
Using Python programming language, write a program that accepts ...
AI User Instruction: Import the necessary libraries for the task. Input:
AI Assistant Solution: To solve this task, we will need the 'math' library for the square root function and the 'fractions' library to simplify the resulting fraction. Here is how to import these libraries:
import math
from fractions import Fraction
The 'math' library provides functions that deal with mathematical operations. The 'sqrt' function, which we will use in this task, is part of this library. The 'fractions' library provides functions for creating and manipulating rational numbers. The 'Fraction' function, which we will use to simplify the resulting | 2308.08155#161 |
2308.08155 | 162 | library provides functions for creating and manipulating rational numbers. The 'Fraction' function, which we will use to simplify the resulting fraction, is part of this library. Next request.
AI User Instruction: Define a function that accepts four integers as input. Input:
AI Assistant Solution: Let's define a function named 'simplify_fraction' that accepts four integers as input. These integers will represent the numbers under the square root in the fractions.
def simplify_fraction(a, b, c, d):
    pass
In this function, 'a', 'b', 'c', and 'd' are the parameters that will accept the four integers as input. The 'pass' statement is a placeholder and will be replaced with actual code in the next steps. Next request.
AI User Instruction: Inside the function, apply the square root function to each of the input integers. Input:
AI Assistant Solution: We will use the 'math.sqrt' function from the 'math' library to apply the square root to each of the input integers. Here is how to do it:
def simplify_fraction(a, b, c, d): | 2308.08155#162 |
2308.08155 | 163 | library to apply the square root to each of the input integers. Here is how to do it:
def simplify_fraction(a, b, c, d):
    sqrt_a = math.sqrt(a)
    sqrt_b = math.sqrt(b)
    sqrt_c = math.sqrt(c)
    sqrt_d = math.sqrt(d)
In this code, 'sqrt_a', 'sqrt_b', 'sqrt_c', and 'sqrt_d' are variables that store the square root of the input integers 'a', 'b', 'c', and 'd' respectively. Next request. | 2308.08155#163 |
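The CAMEL dialogue stops before simplify_fraction is finished. A possible completion is sketched below, assuming the remaining steps would multiply the two fractions and return the numeric value together with a rational approximation; this is a sketch, not the agents' actual code:
Python
import math
from fractions import Fraction

def simplify_fraction(a: int, b: int, c: int, d: int) -> tuple[float, Fraction]:
    """Evaluate sqrt(a)/sqrt(b) * sqrt(c)/sqrt(d); return the value and a rational approximation."""
    value = (math.sqrt(a) / math.sqrt(b)) * (math.sqrt(c) / math.sqrt(d))
    # Fraction only represents rationals, so this is an approximation of the (irrational) result.
    approx = Fraction(value).limit_denominator(10_000)
    return value, approx

print(simplify_fraction(160, 252, 245, 108))  # ~1.2001 (exact value is 5*sqrt(42)/27)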
2308.08155 | 165 | The boss has tasked me with creating a successful product that is efficient.
## Product Goals
[
    "Create a product that meets the needs of the target market",
    "Ensure the product is efficient and easy to use",
    "Achieve a high level of customer satisfaction"
]
## User Stories
[
    "As a user, I want a product that is easy to navigate so that I can find what I need quickly",
    "As a user, I want a product that is reliable and efficient so that I can depend on it",
    "As a user, I want a product that meets my needs so that I can get value for my money",
    "As a user, I want a product that is constantly updated and improved so that I can enjoy the best features",
    "As a user, I want a product that offers good customer support so that I can get help when I need it"
]
## Competitive Analysis | 2308.08155#165 |
2308.08155 | 166 | ]
## Competitive Analysis
[
    "Product A: Offers similar features but lacks efficiency",
    "Product B: Highly efficient but does not meet all the needs of the target market",
    "Product C: Has a high level of customer satisfaction but is not easy to navigate",
    "Product D: Is constantly updated and improved but lacks reliability",
    "Product E: Offers good customer support but is not efficient",
    "Product F: Is reliable and efficient but does not offer good customer support",
    "Our Target Product: Aims to combine all the best features of the competitors while improving on their weaknesses"
]
## Competitive Quadrant Chart
Human Abort
Table 19: Application A7. Example output of MiniWobChat on one web manipulation task "click-button-sequence."
Action | AutoGen
Init chat | 2308.08155#166 |
2308.08155 | 167 | Executor agent to AssistantAgent: We have an autonomous computer control agent that can perform atomic instructions specified by natural language to control computers. There are two types of instructions it can execute. First, given the instruction that matches the regular expression "^type\s.{1,}$", it can type a list of characters via the keyboard. This instruction should specify the target keyboard input for the agent to type. Before this typing instruction, you should first locate the cursor by clicking the input box with a click instruction. Second, given the instruction that matches the regular expression "^clickxpath\s.{1,}$", it can click an HTML element with an xpath that is visible on the webpage. The target of this instruction should be a valid xpath. Below is the HTML code of the webpage where the agent should solve a task.
<div id="wrap" data-wob_ref="2" data-wob_eps="e0">
  <div id="query">Click button ONE, then click button TWO.</div>
  <div id="area" data-wob_ref="3" data-wob_eps="e0">
    <button id="subbtn" style="position: absolute; left: 103px; top: 87px" data-wob_ref="4" data-wob_eps="e0">ONE</button>
    <button id="subbtn2" style="position: absolute; left: 44px; top: 97px" data-wob_ref="5" data-wob_eps="e0">TWO</button>
  </div>
</div>
task: Click button ONE, then click button TWO. plan: 1. Click the button with xpath "//button[@id='subbtn']". 2. Click the button with xpath "//button[@id='subbtn2']". Current task: Click button ONE, then click button TWO. plan:
***************************************************************
AssistantAgent to Executor agent: 1. Click the button with xpath "//button[@id='subbtn']". 2. Click the button with xpath "//button[@id='subbtn2']".
***************************************************************
Executor agent to | 2308.08155#167 |
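The two instruction types above ("type ..." and "clickxpath ...") map naturally onto a small dispatcher. The sketch below assumes a Selenium driver and an already-loaded MiniWoB page; it illustrates the idea and is not the implementation used in the paper:
Python
import re
from selenium.webdriver.common.by import By

TYPE_RE = re.compile(r"^type\s.{1,}$")
CLICKXPATH_RE = re.compile(r"^clickxpath\s.{1,}$")

def execute_instruction(driver, instruction: str) -> None:
    """Execute one atomic natural-language instruction against an open Selenium driver."""
    if CLICKXPATH_RE.match(instruction):
        xpath = instruction.split(" ", 1)[1]
        driver.find_element(By.XPATH, xpath).click()
    elif TYPE_RE.match(instruction):
        text = instruction.split(" ", 1)[1]
        # assumes the cursor was placed in an input box by a preceding click instruction
        driver.switch_to.active_element.send_keys(text)
    else:
        raise ValueError(f"Unrecognized instruction: {instruction!r}")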
2308.08155 | 168 | <div id="area" data-wob_ref="3" data-wob_eps="e0"> <button id="subbtn" style="position: absolute; left: 103px; top: 87px" data-wob_ref="4" data-wob_eps="e0">ONE</button> <button id="subbtn2" style="position: absolute; left: 44px; top: 97px" data-wob_ref="5" data-wob_eps="e0">TWO</button> </div> </div>
task: Click button ONE, then click button TWO. plan: 1. Click the button with xpath "//button[@id='subbtn']". 2. Click the button with xpath "//button[@id='subbtn2']". Current task: Click button ONE, then click button TWO. plan:
***************************************************************
AssistantAgent to Executor agent: 1. Click the button with xpath "//button[@id='subbtn']". 2. Click the button with xpath "//button[@id='subbtn2']".
***************************************************************
Executor agent to AssistantAgent: Below is the HTML code of the webpage where the agent should solve a task.
<div id="wrap" data-wob_ref="2" data-wob_eps="e0"> <div id="query">Click button ONE, then click button TWO.</div> <div id="area" data-wob_ref="3" data-wob_eps="e0"> <button id="subbtn" style= | 2308.08155#168 |
2308.08155 | 169 | 2. Click the button with xpath "//button[@id='subbtn2']". We have a history of instructions that have been already executed by the autonomous agent so far. No instruction has been executed yet. Based on the plan and the history of instructions executed so far, the first instruction should be
***************************************************************
AssistantAgent to Executor agent: clickxpath //button[@id='subbtn']
***************************************************************
Executor agent to AssistantAgent: Hold on, please wait for my next instruction.
***************************************************************
AssistantAgent to Executor agent: Sure, I'll be here when you're ready with your next instruction.
***************************************************************
Executor agent to AssistantAgent: Below is the HTML code of the webpage where the agent should solve a task.
<div id="wrap" data-wob_ref="2" data-wob_eps="e0"> <div id="query">Click button ONE, then click button | 2308.08155#169 |
2308.08155 | 170 | Executor agent to AssistantAgent: Hold on, please wait for my next instruction.
***************************************************************
AssistantAgent to Executor agent: Sure, I'll be here when you're ready with your next instruction.
***************************************************************
Executor agent to AssistantAgent: Below is the HTML code of the webpage where the agent should solve a task.
<div id="wrap" data-wob_ref="2" data-wob_eps="e0">
  <div id="query">Click button ONE, then click button TWO.</div>
  <div id="area" data-wob_ref="3" data-wob_eps="e0">
    <button id="subbtn" style="position: absolute; left: 103px; top: 87px" data-wob_ref="4" data-wob_eps="e0" data-tampered="e0">ONE</button> | 2308.08155#170 |
2308.08155 | 171 | <div id="wrap" data-wob_ref="2" data-wob_eps="e0">
  <div id="query">Click button ONE, then click button TWO.</div>
  <div id="area" data-wob_ref="3" data-wob_eps="e0">
    <button id="subbtn" style="position: absolute; left: 103px; top: 87px" data-wob_ref="4" data-wob_eps="e0" data-tampered="e0">ONE</button>
    <button id="subbtn2" style="position: absolute; left: 44px; top: 97px" data-wob_ref="5" data-wob_eps="e0">TWO</button>
  </div>
</div> | 2308.08155#171 |
2308.08155 | 172 | Current task: Click button ONE, then click button TWO. Here is a plan you are following now. 1. Click the button with xpath "//button[@id='subbtn']". 2. Click the button with xpath "//button[@id='subbtn2']". We have a history of instructions that have been already executed by the autonomous agent so far. 1: clickxpath //button[@id='subbtn'] Based on the plan and the history of instructions executed so far, the next proper instruction should be
***************************************************************
AssistantAgent to Executor agent: clickxpath //button[@id='subbtn2']
***************************************************************
Executor agent to AssistantAgent: SUCCESS!!!!
TERMINATE
| 2308.08155#172 |
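The episode above follows a fixed loop: present the page HTML, the plan, and the history of executed instructions; ask the assistant for the next atomic instruction; execute it; and stop on SUCCESS. A minimal sketch, with hypothetical ask_assistant and execute_and_observe callables standing in for the LLM call and the MiniWoB environment step:
Python
from typing import Callable

def run_episode(ask_assistant: Callable[[str, str, list[str]], str],
                execute_and_observe: Callable[[str], tuple[str, bool]],
                html: str, plan: str, max_steps: int = 10) -> bool:
    """Ask for the next instruction given page HTML, plan, and history; execute; stop on success."""
    history: list[str] = []
    for _ in range(max_steps):
        instruction = ask_assistant(html, plan, history)   # e.g. "clickxpath //button[@id='subbtn']"
        html, success = execute_and_observe(instruction)   # new page state and success flag
        history.append(instruction)
        if success:
            return True                                    # environment reported SUCCESS
    return False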