doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable ⌀) | journal_ref (string, len 8–194, nullable ⌀) | primary_category (string, len 5–17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2308.08155 | 1 | Figure 1: AutoGen enables diverse LLM-based applications using multi-agent conversations. (Left) AutoGen agents are conversable, customizable, and can be based on LLMs, tools, humans, or even a combination of them. (Top-middle) Agents can converse to solve tasks. (Right) They can form a chat, potentially with humans in the loop. (Bottom-middle) The framework supports flexible conversation patterns.
# Abstract
AutoGen2 is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic framework for building diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
1Corresponding author. Email: [email protected] 2https://github.com/microsoft/autogen
# 1 Introduction | 2308.08155#1 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
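The chunk above introduces AutoGen's basic two-agent pattern. Below is a minimal sketch of that pattern using the pyautogen package from the paper's repository; the model name, API key, working directory, and task message are placeholder assumptions, not values from the paper.

```python
import autogen

# Placeholder LLM configuration; substitute a real model name and key.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "sk-..."}]}

# LLM-backed agent that drafts replies (and code) to solve the task.
assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)

# Proxy agent that relays the task and executes any suggested code locally.
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",  # fully autonomous; no human in the loop
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Start a multi-turn conversation between the two agents.
user_proxy.initiate_chat(assistant, message="Compute the 10th Fibonacci number.")
```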
2308.08285 | 1 | # Abstract
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data. | 2308.08285#1 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
2308.08493 | 1 | Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction | 2308.08493#1 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
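The guided-instruction setup described in the abstract above can be made concrete with a small prompt builder. This is a sketch: the exact prompt wording is paraphrased from the paper's description rather than copied from it, and the split fractions are arbitrary choices.

```python
import random

def build_prompts(dataset: str, partition: str, reference: str, label: str | None = None):
    """Cut a reference instance at a random point and build both prompt variants."""
    cut = int(len(reference) * random.uniform(0.3, 0.7))
    head, tail = reference[:cut], reference[cut:]
    guided = (
        f"Instruction: You are provided with the first piece of an instance from "
        f"the {partition} split of the {dataset} dataset. Finish it exactly as it "
        f"appears in the dataset.\n"
        + (f"Label: {label}\n" if label else "")
        + f"First piece: {head}"
    )
    # Same completion task, but without the dataset/partition identifiers.
    general = f"Instruction: Finish the following text.\nFirst piece: {head}"
    return guided, general, tail  # `tail` is the reference continuation to compare against
```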
2308.08155 | 2 | 1Corresponding author. Email: [email protected] 2https://github.com/microsoft/autogen
# 1 Introduction
Large language models (LLMs) are becoming a crucial building block in developing powerful agents that utilize LLMs for reasoning, tool usage, and adapting to new observations (Yao et al., 2022; Xi et al., 2023; Wang et al., 2023b) in many real-world tasks. Given the expanding tasks that could benefit from LLMs and the growing task complexity, an intuitive approach to scale up the power of agents is to use multiple agents that cooperate. Prior work suggests that multiple agents can help encourage divergent thinking (Liang et al., 2023), improve factuality and reasoning (Du et al., 2023), and provide validation (Wu et al., 2023). In light of the intuition and early evidence of promise, it is intriguing to ask the following question: how can we facilitate the development of LLM applications that could span a broad spectrum of domains and complexities based on the multi-agent approach? | 2308.08155#2 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08285 | 2 | Introduction Dense passage retrieval (Karpukhin et al. 2020) has broad real-world applications, like web search (Liu et al. 2021; Zou et al. 2023), retrieval-augmented generation (Lewis et al. 2020; Cai et al. 2022) and query answering (Sakata et al. 2019). It utilizes well-trained language-model-based retrievers to extract sentence representations and retrieve relevant passages with given queries. Recent studies have made impressive progress in improving the effectiveness of dense retrievers, such as hard negative mining (Qu et al. 2021), late interaction (Khattab and Zaharia 2020; Santhanam et al. 2022), distillation (Ren et al. 2021; Lu et al. 2022), and ensembling (Gao and Callan 2022; Wu et al. 2023b). Moreover, the development of task-specific pre-training (Gao and Callan 2021; Wu et al. 2023a; Liu and Shao 2022) pushes the limits of retrieval tasks to new boundaries. Specifically, those studies usually employ contrastive learning with span corruption (Gao and Callan 2022; Izacard et al. | 2308.08285#2 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
2308.08493 | 2 | with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets. | 2308.08493#2 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08155 | 3 | Our insight is to use multi-agent conversations to achieve it. There are at least three reasons confirming its general feasibility and utility thanks to recent advances in LLMs: First, because chat-optimized LLMs (e.g., GPT-4) show the ability to incorporate feedback, LLM agents can cooperate through conversations with each other or human(s), e.g., a dialog where agents provide and seek reasoning, observations, critiques, and validation. Second, because a single LLM can exhibit a broad range of capabilities (especially when configured with the correct prompt and inference settings), conversations between differently configured agents can help combine these broad LLM capabilities in a modular and complementary manner. Third, LLMs have demonstrated ability to solve complex tasks when the tasks are broken into simpler subtasks. Multi-agent conversations can enable this partitioning and integration in an intuitive manner. How can we leverage the above insights and support different applications with the common requirement of coordinating multiple agents, potentially backed by LLMs, humans, or tools exhibiting different capacities? We desire a multi-agent conversation framework with generic abstraction and effective | 2308.08155#3 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 4 | potentially backed by LLMs, humans, or tools exhibiting different capacities? We desire a multi-agent conversation framework with generic abstraction and effective implementation that has the flexibility to satisfy different application needs. Achieving this requires addressing two critical questions: (1) How can we design individual agents that are capable, reusable, customizable, and effective in multi-agent collaboration? (2) How can we develop a straightforward, unified interface that can accommodate a wide range of agent conversation patterns? In practice, applications of varying complexities may need distinct sets of agents with specific capabilities, and may require different conversation patterns, such as single- or multi-turn dialogs, different human involvement modes, and static vs. dynamic conversation. Moreover, developers may prefer the flexibility to program agent interactions in natural language or code. Failing to adequately address these two questions would limit the framework's scope of applicability and generality. While there is contemporaneous exploration of multi-agent approaches,3 we present AutoGen, a generalized multi-agent conversation framework (Figure 1), based on the following new concepts. 1 Customizable and conversable agents. AutoGen uses a | 2308.08155#4 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08285 | 4 | Large language models (LLMs), like ChatGPT (Ouyang et al. 2022), PaLM (Chowdhery et al. 2022), LLaMA (Touvron et al. 2023), and tk-Instruct (Wang et al. 2022b), are pre-trained on large-scale web corpus and exhibit excellent abilities in context generation and instruction following. There has been growing interest in incorporating powerful LLMs into retrieval tasks. Existing studies (Gao et al. 2023; Wang, Yang, and Wei 2023; Jagerman et al. 2023; Yu et al. 2023) focus on query expansion with LLMs for enhancing the lexical match of query-passage pairs. They utilize the LLM-generated relevant passages as enriched query contexts. Those studies have yielded better retrieval performances, especially for zero-shot scenarios. Nevertheless, conducting query expansion still needs heavy online inferences with LLMs, which slows down the retrieval speed. | 2308.08285#4 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
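The LLM query expansion described in the chunk above boils down to one extra generation per query. A minimal sketch, where `llm_complete` is a hypothetical stand-in for whatever LLM client is available:

```python
def expand_query(query: str, llm_complete) -> str:
    """Ask an LLM for a pseudo-passage and append it as enriched query context."""
    prompt = (
        "Write a short passage that answers the following query.\n"
        f"Query: {query}\nPassage:"
    )
    pseudo_passage = llm_complete(prompt)  # one online LLM inference per query
    # Concatenate so both lexical matchers and dense encoders see the expansion.
    return f"{query} {pseudo_passage}"
```

This also makes the chunk's cost argument visible: the expansion call happens at retrieval time, for every incoming query.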
2308.08493 | 4 | The rise of Transformer networks (Vaswani et al. 2017) has spurred the development of large language models (LLMs), marking a new epoch in Natural Language Processing (NLP). This shift has led to an extensive range of LLMs (Touvron et al. 2023a;b; Biderman et al. 2023; Köpf et al. 2023; Chung et al. 2022; Penedo et al. 2023, inter-alia) which excel in various professional and academic benchmarks (Bang et al. 2023; Bubeck et al. 2023). Their superior performance is primarily attributed to the massive web data consumed by these billion/trillion-parameter LLMs during training. However, the impressive LLM performance observed on many downstream tasks (e.g., summarization, natural language inference, text classification) may be inflated due to data contamination, i.e., the presence of test data from these downstream tasks in the pre-training data of LLMs. Guaranteeing lack of contamination is not trivial due to two potential sources of contamination: directly from ingesting the official version of a dataset | 2308.08493#4 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08155 | 5 | present AutoGen, a generalized multi-agent conversation framework (Figure 1), based on the following new concepts. 1 Customizable and conversable agents. AutoGen uses a generic design of agents that can leverage LLMs, human inputs, tools, or a combination of them. The result is that developers can easily and quickly create agents with different roles (e.g., agents to write code, execute code, wire in human feedback, validate outputs, etc.) by selecting and configuring a subset of built-in capabilities. The agent's backend can also be readily extended to allow more custom behaviors. To make these agents suitable for multi-agent conversation, every agent is made conversable: they can receive, react, and respond to messages. When configured properly, an agent can hold multiple turns of conversations with other agents autonomously or solicit human inputs at certain rounds, enabling human agency and automation. The conversable agent design leverages the strong capability of the most advanced LLMs in taking feedback and making progress via chat and also allows combining capabilities of LLMs in a modular fashion. (Section 2.1) | 2308.08155#5 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
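The configurable human involvement described in the chunk above maps onto the user proxy agent's input mode. A sketch using the pyautogen package; the model/key values are placeholders, and the parameter names reflect the v0.2-era API.

```python
import autogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "sk-..."}]}  # placeholder

coder = autogen.AssistantAgent(
    "coder",
    llm_config=llm_config,
    system_message="You write and revise Python code for the given task.",
)

# human_input_mode selects the human agency vs. automation trade-off:
#   "ALWAYS"    -> solicit human input every round
#   "TERMINATE" -> ask only when the chat would otherwise end
#   "NEVER"     -> fully autonomous
reviewer = autogen.UserProxyAgent(
    "reviewer",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=5,
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

reviewer.initiate_chat(coder, message="Write a function that deduplicates a list.")
```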
2308.08285 | 5 | While query expansion expands the query with generated passages, document expansion, i.e., query generation, is also a popular technique to boost retrieval performances. It exploits a fully fine-tuned model, like T5 (Nogueira et al. 2019) or BART (Cho et al. 2022), to generate relevant queries of a given passage, which either enrich the context of the passage or serve as additional fine-tuning corpus. Due to the excellent generation ability of LLMs, huge potential lies in the utilization of LLMs as document expansion models. However, we argue that several drawbacks still hinder such usage. Firstly, document expansion relies on the online inference of LLM in open-domain passage retrieval, particularly when dealing with candidate corpora from new domains. To avoid the need for additional LLM inferences during retrieval, a feasible solution is to pre-train or fine-tune an end-to-end retriever. However, this approach lacks exploration and necessitates training paradigms specifically designed for retrieval. Furthermore, document expansion involves feeding a substantial corpus into LLMs to generate queries, resulting in significant costs associated with LLM inferences. Unfortunately, there is a shortage of methods to mitigate these inference costs. | 2308.08285#5 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
2308.08493 | 5 | ing lack of contamination is not trivial due to two potential sources of contamination: directly from ingesting the official version of a dataset (easier to control), and indirectly through duplicated data found somewhere on the web (nearly impossible to control).1 The potential of data contamination is especially relevant for closed models such as the GPT-3/3.5 family (Brown et al. 2020) and GPT-4 | 2308.08493#5 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2 Conversation programming. A fundamental insight of AutoGen is to simplify and unify complex LLM application workflows as multi-agent conversations. So AutoGen adopts a programming paradigm centered around these inter-agent conversations. We refer to this paradigm as conversation programming, which streamlines the development of intricate applications via two primary steps: (1) defining a set of conversable agents with specific capabilities and roles (as described above); (2) programming the interaction behavior between agents via conversation-centric computation and control. Both steps can be achieved via a fusion of natural and programming languages to build applications with a wide range of conversation patterns and agent behaviors. AutoGen provides ready-to-use implementations and also allows easy extension and experimentation for both steps. (Section 2.2)
3We refer to Appendix A for a detailed discussion.
AutoGen also provides a collection of multi-agent applications created using conversable agents and conversation programming. These applications demonstrate how AutoGen can easily support applications of various complexities and LLMs of various capabilities. Moreover, we perform both evaluation on benchmarks and a pilot study of new applications. The results show that AutoGen can help achieve outstanding performance on many tasks, and enable innovative ways of using LLMs, while reducing development effort. (Section 3 and Appendix D)
# 2 The AutoGen Framework | 2308.08155#6 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
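Step (2) of conversation programming, controlling the conversation flow with a fusion of natural language and code, can be sketched as follows: the system message states the stop signal in natural language, while a Python predicate enforces it. Model/config values are placeholders, and the parameter names reflect the v0.2-era pyautogen API.

```python
import autogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "sk-..."}]}  # placeholder

solver = autogen.AssistantAgent(
    "solver",
    llm_config=llm_config,
    system_message="Solve the task step by step. Reply TERMINATE when finished.",
)

driver = autogen.UserProxyAgent(
    "driver",
    human_input_mode="NEVER",
    code_execution_config=False,
    # Conversation-centric control: end the chat once the signal appears.
    is_termination_msg=lambda m: "TERMINATE" in (m.get("content") or ""),
)

driver.initiate_chat(solver, message="Is 2027 a prime number? Explain briefly.")
```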
2308.08285 | 6 | To mitigate the high online inference costs of LLM document expansion, as is presented in Figure 1, we prompt the LLM query generation for a series of pre-training experiments tailored for dense retrieval. We emphasize that our work only involves LLM inferences at the pre-training stage of retrievers, but not the inference stage as traditional query (Gao et al. 2023; Wang, Yang, and Wei 2023) or document expansion (Nogueira et al. 2019). Two pre-training paradigms, i.e., contrastive learning and bottlenecked query generation, are explored in detail.
*These authors contributed equally.
For contrastive pre-training, a direct contrastive loss of the
LLaMA Prompts:
### Instruction:
Generate ten search queries for the following passage
### Input: <passage>
### Response:

Tk-Instruct Prompts:
Definition: Generate one search query in question or phrase format. The generated query should be unambiguous and related to the input.
Positive Example 1 - Input: <Example 1 - Input> Output: <Example 1 - Output>
Positive Example 2 - Input: <Example 2 - Input> Output: <Example 2 - Output>
Now complete the following example - Input: <passage> Output:
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
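The two prompt formats presented in Figure 1 are plain templates and can be reproduced directly. A sketch of both builders, mirroring the figure's wording:

```python
def llama_prompt(passage: str) -> str:
    # Alpaca-style instruction prompt from Figure 1.
    return (
        "### Instruction:\n"
        "Generate ten search queries for the following passage\n"
        f"### Input: {passage}\n"
        "### Response:"
    )

def tk_instruct_prompt(passage: str, examples: list[tuple[str, str]]) -> str:
    # tk-Instruct definition-plus-positive-examples format from Figure 1.
    parts = [
        "Definition: Generate one search query in question or phrase format. "
        "The generated query should be unambiguous and related to the input."
    ]
    for i, (inp, out) in enumerate(examples, start=1):
        parts.append(f"Positive Example {i} - Input: {inp} Output: {out}")
    parts.append(f"Now complete the following example - Input: {passage} Output:")
    return "\n".join(parts)
```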
2308.08493 | 6 | 1While dataset licensing reduces indirect contamination to a certain extent, it does not eliminate it. For example, websites such as the Hugging Face page for datasets (Wolf et al. 2020) currently host copies of the OntoNotes (Weischedel et al. 2013) and CoNLL-2003 (Tjong Kim Sang & De Meulder 2003) datasets, despite the fact that their respective licenses prohibit it.
(OpenAI 2023; Bubeck et al. 2023), and, needless to say, raises questions on the validity of evaluations and benchmarks conducted so far (Chang et al. 2023; Zhu et al. 2023; Bordt & von Luxburg 2023; Ray 2023; Penedo et al. 2023).
To address this issue, we propose an inexpensive and robust approach to detect data contamination for a given dataset partition automatically. Importantly, our approach functions under two realistic assumptions: (a) we lack direct access to the pre-training data of the LLMs, and (b) we have limited computational resources. Intuitively, our method starts by identifying potential contamination in individual instances that are drawn from a small random sample of the corresponding dataset partition (we use samples of 10 instances in this work). Using the information obtained from individual instances, our approach then assesses if an entire dataset partition is contaminated.
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
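The instance-level check described above can be sketched as a single function: truncate a reference instance, ask the model under test to finish it under the guided instruction, and compare the completion with the held-back tail. `llm_complete` and the prompt wording are assumptions; the paper also accepts near-exact matches, which this exact-match sketch omits.

```python
import random

def check_instance(llm_complete, dataset: str, partition: str, reference: str):
    cut = random.randint(len(reference) // 3, 2 * len(reference) // 3)
    head, tail = reference[:cut], reference[cut:]
    prompt = (
        f"Instruction: You are provided with the first piece of an instance from "
        f"the {partition} split of the {dataset} dataset. Finish it exactly as it "
        f"appears in the dataset.\nFirst piece: {head}"
    )
    completion = llm_complete(prompt)
    return completion.strip() == tail.strip(), completion, tail

# Partition-level usage: run this over a small random sample
# (10 instances per partition in the paper) and aggregate the results.
```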
2308.08155 | 7 | # 2 The AutoGen Framework
To reduce the effort required for developers to create complex LLM applications across various domains, a core design principle of AutoGen is to streamline and consolidate multi-agent workflows using multi-agent conversations. This approach also aims to maximize the reusability of implemented agents. This section introduces the two key concepts of AutoGen: conversable agents and conversation programming.
# 2.1 Conversable Agents
In AutoGen, a conversable agent is an entity with a specific role that can pass messages to send and receive information to and from other conversable agents, e.g., to start or continue a conversation. It maintains its internal context based on sent and received messages and can be configured to possess a set of capabilities, e.g., enabled by LLMs, tools, or human input, etc. The agents can act according to programmed behavior patterns described next. | 2308.08155#7 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
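Section 2.1's definition of a conversable agent (an entity that receives, reacts, and responds while maintaining internal context) can be illustrated framework-agnostically. This is an illustrative abstraction, not AutoGen's actual class:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Conversable:
    name: str
    reply_fn: Callable[[List[Dict]], str]   # backed by an LLM, a tool, or a human
    context: List[Dict] = field(default_factory=list)

    def receive(self, message: Dict) -> None:
        self.context.append(message)         # maintain internal conversation state

    def generate_reply(self) -> str:
        return self.reply_fn(self.context)

def run_chat(a: Conversable, b: Conversable, opening: str, max_turns: int = 4) -> None:
    """Alternate speakers turn by turn: the conversation is the control flow."""
    sender, receiver, content = a, b, opening
    for _ in range(max_turns):
        receiver.receive({"from": sender.name, "content": content})
        content = receiver.generate_reply()
        sender, receiver = receiver, sender
```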
2308.08285 | 7 | Output: <Example 1 - Output>
Positive Example 2 - Input: <Example 2 - Input> Output: <Example 2 - Output>
Now complete the following example - Input: <passage> Output:
Figure 1: Query Generation prompts for Alpaca-LLaMA and tk-Instruct.
generated queries and passages is used to pull together their embeddings, while pushing away in-batch negatives in the latent space. We follow the contrastive architecture in (Gao and Callan 2022) for fair comparison, and we argue that LLM-generated queries can serve as the better context for effective query-passage alignment. | 2308.08285#7 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
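The contrastive pre-training objective described above (pull each LLM-generated query toward its source passage, push apart in-batch negatives) is a standard InfoNCE loss. A minimal PyTorch sketch; the temperature value is an arbitrary assumption:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q_emb: torch.Tensor, p_emb: torch.Tensor, tau: float = 0.05):
    """InfoNCE over a batch of (generated query, passage) embedding pairs.

    q_emb, p_emb: [B, d] tensors; row i of each side forms the positive pair,
    and every other passage in the batch serves as an in-batch negative.
    """
    q = F.normalize(q_emb, dim=-1)
    p = F.normalize(p_emb, dim=-1)
    logits = q @ p.t() / tau                           # [B, B] similarity matrix
    labels = torch.arange(q.size(0), device=q.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```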
2308.08493 | 7 | More formally, to identify contamination of individual instances, we employ a "guided instruction:" a prompt that integrates distinct identifiers from the source dataset from which the reference instance originates. Such information includes the dataset name, its partition (e.g., train, test, or validation), and a randomly selected initial portion of the reference instance, complemented by its label when relevant. With these signals in the prompt, we instruct the LLM to finish the given partial instance. Using these generated individual completions, we propose two heuristics to estimate if an entire dataset partition is contaminated. The first heuristic states that a partition is likely to be contaminated if the average overlap score between generated completions and reference instances (as measured by ROUGE-L (Lin 2004) and BLEURT (Sellam et al. 2020)) observed with the guided instruction is statistically significantly larger than the one measured with a "general instruction," which does not include the dataset and partition name. The second heuristic labels a partition as contaminated if a classifier based on GPT-4 with few-shot | 2308.08493#7 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
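The first partition-level heuristic above can be sketched with off-the-shelf scoring and a significance test. The choice of Welch's t-test here is one reasonable reading of "statistically significantly larger"; the paper does not prescribe this exact test, and BLEURT could be substituted for ROUGE-L.

```python
from rouge_score import rouge_scorer
from scipy import stats

scorer = rouge_scorer.RougeScorer(["rougeL"])

def partition_contaminated(guided_outs, general_outs, tails, alpha=0.05) -> bool:
    """Flag the partition if guided completions overlap the reference tails
    significantly more than general-instruction completions do."""
    guided = [scorer.score(t, o)["rougeL"].fmeasure for t, o in zip(tails, guided_outs)]
    general = [scorer.score(t, o)["rougeL"].fmeasure for t, o in zip(tails, general_outs)]
    _, p_value = stats.ttest_ind(guided, general, equal_var=False, alternative="greater")
    return p_value < alpha
```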
2308.08155 | 8 | Agent capabilities powered by LLMs, humans, and tools. Since an agent's capabilities directly influence how it processes and responds to messages, AutoGen allows flexibility to endow its agents with various capabilities. AutoGen supports many common composable capabilities for agents, including 1) LLMs. LLM-backed agents exploit many capabilities of advanced LLMs such as role playing, implicit state inference and progress making conditioned on conversation history, providing feedback, adapting from feedback, and coding. These capabilities can be combined in different ways via novel prompting techniques4 to increase an agent's skill and autonomy. AutoGen also offers enhanced LLM inference features such as result caching, error handling, message templating, etc., via an enhanced LLM inference layer. 2) Humans. Human involvement is desired or even essential in many LLM applications. AutoGen lets a human participate in agent conversation via human-backed agents, which could solicit human inputs at certain rounds of a conversation depending on the agent configuration. The default user proxy agent allows configurable human involvement levels and patterns, e.g., frequency and conditions for requesting human input including the option for humans to skip providing input. 3) Tools. Tool-backed agents have the capability to execute tools via code execution or function execution. For example, the default user proxy agent in AutoGen is able to execute code suggested by LLMs, or make LLM-suggested function calls. | 2308.08155#8 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
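A hedged sketch of the three capability back-ends described in the chunk above, assuming the pyautogen package; the llm_config value is a placeholder, not a real credential or a confirmed default.

```python
import autogen

llm_config = {"model": "gpt-4"}  # placeholder; supply your own model/credentials

# 1) LLM-backed agent: replies are produced by LLM inference
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

# 2) Human-backed agent: solicits human input on every turn, executes nothing
human = autogen.UserProxyAgent(
    name="human",
    human_input_mode="ALWAYS",
    code_execution_config=False)

# 3) Tool-backed agent: never asks a human, executes LLM-suggested code instead
executor = autogen.UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding"})
```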
2308.08285 | 8 | Bottlenecked pre-training techniques are popular in recent works (Lu et al. 2021; Liu and Shao 2022; Wu et al. 2023a), which connect auxiliary decoders solely through the encoder's representation. To pre-train with bottlenecked query generation, similar to (Wu, Ma, and Hu 2022), we adopt a single-layer Transformer decoder and use the causal language model (CLM) task to generate expanded queries with the assistance of the encoder's embeddings. This bottlenecked encoder-decoder structure first derives a compressed representation through the encoder and then decompresses the context information as LLM-expanded queries via the decoder. As a result, the sentence embeddings contain enriched context information, providing effective initialization for fine-tuning and inference. Especially, LLM-based document expansion requires no human-labeled corpus, unlike previous works (Wu, Ma, and Hu 2022; Cho et al. 2022) that train additional domain-specific generative models like docT5query (Nogueira et al. 2019).
Furthermore, to mitigate the LLM inference costs for document expansion, we incorporate a two-stage curriculum | 2308.08285#8 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
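An illustrative PyTorch sketch (not the authors' code) of the bottlenecked query generation idea described above: the decoder sees the passage only through the encoder's [CLS] representation and must regenerate the LLM-expanded query from that bottleneck. Layer sizes, the encoder interface (a HuggingFace-style model), and the mask handling are assumptions.

```python
import torch
import torch.nn as nn

class BottleneckedQueryGeneration(nn.Module):
    def __init__(self, encoder, vocab_size, hidden=768, nhead=12):
        super().__init__()
        self.encoder = encoder                      # e.g. a HuggingFace BERT
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=nhead,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)  # single layer
        self.lm_head = nn.Linear(hidden, vocab_size)

    def forward(self, passage_ids, passage_mask, query_ids):
        enc_out = self.encoder(input_ids=passage_ids,
                               attention_mask=passage_mask)
        # The bottleneck: the decoder's memory is only the [CLS] hidden state.
        cls = enc_out.last_hidden_state[:, :1, :]   # [B, 1, H]
        tgt = self.embed(query_ids)
        causal = nn.Transformer.generate_square_subsequent_mask(
            query_ids.size(1)).to(query_ids.device)
        dec = self.decoder(tgt, memory=cls, tgt_mask=causal)
        return self.lm_head(dec)                    # CE loss vs. shifted queries
```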
2308.08493 | 8 | the dataset and partition name. The second heuristic labels a partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning (ICL; Brown et al. (2020)) marks at least one generated completion as an exact match with the reference instance or at least two generated completions as near-exact matches, where a near-exact match indicates a completion that exhibits considerable semantic and lexical alignment with the reference instance. | 2308.08493#8 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
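The counting rule in the chunk above translates directly to code; a minimal sketch, where the label strings are our naming for the judge's outputs:

```python
def partition_contaminated(judgments: list[str]) -> bool:
    # judgments: per-completion labels from the GPT-4 few-shot ICL classifier
    exact = judgments.count("exact match")
    near_exact = judgments.count("near-exact match")
    # >=1 exact match, or >=2 near-exact matches, flags the partition.
    return exact >= 1 or near_exact >= 2
```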
Agent customization and cooperation. Based on application-specific needs, each agent can be configured to have a mix of basic back-end types to display complex behavior in multi-agent conversations. AutoGen allows easy creation of agents with specialized capabilities and roles by reusing or extending the built-in agents. The yellow-shaded area of Figure 2 provides a sketch of the built-in agents in AutoGen. The ConversableAgent class is the highest-level agent abstraction and, by default, can use LLMs, humans, and tools. The AssistantAgent and UserProxyAgent are two pre-configured ConversableAgent subclasses, each representing a common usage mode, i.e., acting as an AI assistant (backed by LLMs) and acting as a human proxy to solicit human input or execute code/function calls (backed by humans and/or tools).
In the example on the right-hand side of Figure 1, an LLM-backed assistant agent and a tool- and human-backed user proxy agent are deployed together to tackle a task. Here, the assistant agent generates a solution with the help of LLMs and passes the solution to the user proxy agent. Then, the user proxy agent solicits human inputs or executes the assistant's code and passes the results as feedback back to the assistant. | 2308.08155#9 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
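A minimal sketch of the assistant / user-proxy pairing described above, assuming the pyautogen package; the task message mirrors the paper's running example, while the config values are placeholders.

```python
import autogen

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"model": "gpt-4"})              # placeholder config

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",               # ask a human only at termination
    code_execution_config={"work_dir": "coding"})

# The user proxy executes the assistant's code blocks and returns the output
# (or the error trace) as feedback, so the assistant can iterate on a fix.
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of META and TESLA stock price change YTD.")
```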
2308.08493 | 9 | The primary contributions of this paper are as follows:
(1) We propose a novel data contamination detection method for LLMs that is inexpensive and robust. As indicated above, our method combines a "guided instruction" to complete partial instances randomly drawn from the investigated dataset partition and several heuristics to generalize from instance- to partition-level contamination decisions.
(2) We evaluate our proposed methods in 28 distinct scenarios. These scenarios are created by two state-of-the-art LLMs: GPT-3.5 and GPT-4, and span seven datasets for classification, summarization, and natural language inference (NLI) tasks. The rationale behind the 28 scenarios is that for each dataset, we separately explore potential data contamination in the train and test splits (or the validation set, in cases where the labeled test set is not publicly available). Our evaluation indicates that our best method is the one that uses guided instruction to complete partial instances, and the one that evaluates these completions by the GPT-4 few-shot ICL classifier, achieving 92%-100% accuracy compared to contamination labels assigned by human experts for dataset partitions. | 2308.08493#9 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
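An illustrative sketch of the GPT-4 few-shot ICL judge mentioned above, using the openai Python client; the prompt wording, label set, and FEW_SHOT_EXAMPLES placeholder are our assumptions, not the paper's exact prompt.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_EXAMPLES = "..."  # labeled (reference, completion) pairs; elided here

def judge_completion(reference: str, completion: str) -> str:
    prompt = (
        f"{FEW_SHOT_EXAMPLES}\n\n"
        f"Reference: {reference}\n"
        f"Completion: {completion}\n"
        "Label (exact match / near-exact match / inexact match):"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```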
2308.08285 | 10 | In our study, we use Alpaca-LLaMA (Wang et al. 2023) and tk-Instruct (Wang et al. 2022b) with different parameter sizes for query generation. We conduct the experiments on the large-scale MS-MARCO (Nguyen et al. 2016) datasets and test on the in-domain MS-MARCO passage retrieval task, TREC-DL 2019 & 2020 (Craswell et al. 2020, 2021) and the out-of-domain BEIR (Thakur et al. 2021) task. Several benefits are observed in our studies. 1) LLMs can generate a large number of high-quality queries based on the world knowledge of the LLM itself, which requires no additional human labeling and is suitable for scenarios lacking manually annotated data. 2) Contrastive pre-training with LLM-generated queries has stronger in-domain zero-shot retrieval performance and on-par performance with the state-of-the-art (SOTA) methods after full fine-tuning. It also shows better domain adaptation abilities on out-of-domain BEIR datasets. 3) Bottlenecked query generation shows better initialization abilities after full fine-tuning. 4) | 2308.08285#10 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
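An illustrative zero-shot query-generation call in the spirit of the Alpaca setup above, using HuggingFace transformers; the checkpoint name and prompt wording are assumptions (the paper's actual prompts are shown in its Figure 1).

```python
from transformers import pipeline

# The checkpoint below is a stand-in for an Alpaca-style instruction model.
generator = pipeline("text-generation", model="chavinlo/alpaca-native")

def expand_queries(passage: str, n: int = 5) -> list[str]:
    prompt = ("Below is a passage. Write a search query that this passage "
              f"could answer.\n\nPassage: {passage}\n\nQuery:")
    outputs = generator(prompt, max_new_tokens=32, do_sample=True,
                        num_return_sequences=n)
    # Strip the echoed prompt, keeping only the generated query text.
    return [o["generated_text"][len(prompt):].strip() for o in outputs]
```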
2308.08155 | 11 | [Figure 2 graphic (top and middle sub-figures): the built-in agents, ConversableAgent with unified conversation interfaces (send, receive, generate_reply) and its pre-configured subclasses AssistantAgent, UserProxyAgent, and GroupChatManager; a developer example that (1.1) defines the agents, (1.2) registers a custom reply function via A.register_reply(B, reply_func_A2B), and (2) initiates the conversation with A.initiate_chat("Plot a chart of META and TESLA stock price change YTD.", B); remaining layout not recoverable.] | 2308.08155#11 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
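A best-effort sketch of the register_reply pattern shown in the figure above, assuming pyautogen's reply-function convention of returning a (final, reply) tuple; treat the exact signature as an assumption rather than a verified API contract.

```python
import autogen

def reply_func_A2B(recipient, messages=None, sender=None, config=None):
    last = messages[-1]["content"]
    # Custom behavior goes here, e.g. consulting a human or a third agent
    # before answering; returning (True, reply) makes this reply final.
    return True, f"Acknowledged: {last[:60]}"

agent_a = autogen.ConversableAgent(name="A", llm_config=False)
agent_b = autogen.ConversableAgent(name="B", llm_config=False)
agent_a.register_reply(agent_b, reply_func_A2B)  # invoked inside generate_reply
```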
2308.08493 | 11 | # 2 RELATED WORK
Despite its importance, the topic of data contamination is not as thoroughly examined as its closely related field, data memorization (Carlini et al. 2023; Kandpal et al. 2022; Carlini et al. 2021; Razeghi et al. 2022). Among the limited investigations focusing specifically on data contamination in LLMs, we find notable examples in Radford et al. (2019) and Brown et al. (2020) on GPT-2 and GPT-3, respectively. They used high-order n-grams (e.g., 13-gram) to detect overlapping content between the pre-training data and the evaluation dataset. Most research subsequent to Brown et al. (2020) adopted similar methods for detecting data contamination (Touvron et al. 2023b; Du et al. 2022; Chowdhery et al. 2022; Wei et al. 2021), and most recently, substring matching for GPT-4 (OpenAI 2023). However, the scope of existing research has been predominantly confined to model providers, and it encounters specific limitations, particularly when applied to closed-source
| 2308.08493#11 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
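A minimal sketch of the high-order n-gram overlap test described above (the GPT-2/GPT-3-style check); the 13-gram order follows the text, everything else is illustrative.

```python
def ngrams(tokens, n=13):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlaps_pretraining(example_tokens, pretraining_ngrams, n=13):
    # pretraining_ngrams: a set (or Bloom filter) built once over the corpus;
    # any shared 13-gram flags the evaluation example as overlapping.
    return not ngrams(example_tokens, n).isdisjoint(pretraining_ngrams)
```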
2308.08155 | 12 | [Figure 2 graphic (bottom sub-figure): the resulting automated agent chat for "Plot a chart of META and TESLA stock price change YTD.", illustrating conversation-driven control flow and conversation-centric computation; the executed program fails with "Execution Error: package yfinance is not installed", and the assistant replies "Sorry! Please first pip install yfinance and then execute"; remaining layout not recoverable.] | 2308.08155#12 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08285 | 12 | Our contributions are summarized as follows.
• We systematically study the potential of incorporating LLMs into the pre-training stage of dense passage retrieval, suitable for settings with scarce human-annotated data.
• We find stronger zero-shot and fine-tuned performance with contrastive learning, and good initialization abilities with bottlenecked query generation pre-training.
• We design a two-stage curriculum learning strategy that greatly reduces the usage of LLM-expanded queries with only minor performance degradation.
# Methodology
In this section, we first introduce the definition of dense passage retrieval. Then we introduce our method for LLM query generation, the detailed pre-training designs of contrastive learning and bottlenecked query generation, and the two-stage curriculum learning strategy for extended analyses. | 2308.08285#12 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
2308.08493 | 12 | LLMs. These limitations primarily involve the need for access to pre-training data (Brown et al. 2020; Du et al. 2022; Wei et al. 2021), the requirement for substantial computational resources (Touvron et al. 2023b), or the need for extensive manual labor (Chowdhery et al. 2022). Our approach aims to overcome these hurdles, enabling the assessment of data contamination in scenarios where the pre-training data is either not openly accessible or when significant computational hardware is not available despite having access to the pre-training data. | 2308.08493#12 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08155 | 13 | Figure 2: Illustration of how to use AutoGen to program a multi-agent conversation. The top sub-figure illustrates the built-in agents provided by AutoGen, which have unified conversation interfaces and can be customized. The middle sub-figure shows an example of using AutoGen to develop a two-agent system with a custom reply function. The bottom sub-figure illustrates the resulting automated agent chat from the two-agent system during program execution.
By allowing custom agents that can converse with each other, conversable agents in AutoGen serve as a useful building block. However, to develop applications where agents make meaningful progress on tasks, developers also need to be able to specify and mold these multi-agent conversations.
# 2.2 Conversation Programming | 2308.08155#13 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08285 | 13 | # Preliminaries
Given a query q and a set of passages P_n, the passage retrieval task aims to find the relevant passages based on similarity search. Dense passage retrieval utilizes an encoder model Enc, e.g., a Transformer-based model like BERT (Devlin et al. 2019), to yield the sentence representations and measure query-passage similarities through inner product or cosine distance. Formally, given a query q and a passage p, we can use a query encoder Enc_q and a passage encoder Enc_p to derive their corresponding sentence representations, i.e., v_q and v_p, from the encoder hidden states of the last layer at the [CLS] position. Then the similarity
[Figure 2 graphic: three panels, a) LLM query generation, b) bottlenecked query generation pre-training (encoder with MLM loss, decoder with CE loss generating the LLM-expanded query from the passage representation), c) contrastive pre-training over passage and LLM-generated query embeddings; remaining layout not recoverable; the full caption appears in the next chunk.] | 2308.08285#13 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
2308.08493 | 13 | Our paper is closest in spirit to the work of Sainz et al. (2023), who also detected contamination when access to the pre-training data is not available. This effort prompted ChatGPT, particularly when GPT-3.5 is its base model, to generate the first instances from different dataset partitions. The underlying assumption here is that if an LLM can reproduce dataset instances, it must have been trained using that particular split. However, our research shows that this method can be unreliable and subject to failure. Such failures can result either from the sparsity introduced by the request to reproduce the first instances of a dataset split or from the inability to bypass the safety filters set by the model provider when the model is asked to generate copyrighted content like dataset instances. Throughout this paper, we refer to this approach as "ChatGPT-Cheat?," taking inspiration from the title of the referenced blog post.
# 3 APPROACH | 2308.08493#13 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08155 | 14 | # 2.2 Conversation Programming
As a solution to the above problem, AutoGen utilizes conversation programming, a paradigm that considers two concepts: the first is computation, the actions agents take to compute their response in a multi-agent conversation; the second is control flow, the sequence (or conditions) under which these computations happen. As we will show in the applications section, the ability to program these helps implement many flexible multi-agent conversation patterns. In AutoGen, these computations are conversation-centric. An agent takes actions relevant to the conversations it is involved in, and its actions result in message passing for consequent conversations (unless a termination condition is satisfied). Similarly, control flow is conversation-driven: the participating agents' decisions on which agents to send messages to and the procedure of computation are functions of the inter-agent conversation. This paradigm helps one to reason intuitively about a complex workflow as agent action-taking and conversation message-passing between agents. | 2308.08155#14 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08285 | 14 | {
{
{
|
Figure 2: Pre-training with LLM-based document expansion for dense passage retrieval. a) We utilize large language models (LLMs) to generate pseudo-queries with zero-shot or few-shot prompts. b) Bottlenecked query generation pre-training appends an auxiliary Transformers decoder to the encoder. Besides the Masked Language Modelling (MLM) loss of the encoder, we connect the encoder-decoder with merely the bottlenecked representation, i.e., the hidden states of [CLS] token, and make the decoder generate whole LLM-expanded queries with the Cross-Entropy (CE) loss. c) Contrastive pre-training pulls together the representations of the passage and LLM-expanded queries and pushes away in-batch negatives. To minimize reliance on LLM expansions, we implement a two-stage curriculum learning strategy. It first utilizes randomly sampled passages to fully initialize the encoders. And then we can use a relatively small amount of LLM-expanded queries in the second phase.
between q and p, i.e., Sim(q, p), can be calculated as the inner product of v_q and v_p for simplicity, as follows:
Sim(q, p) = Enc_q(q) · Enc_p(p) = v_q^T v_p (1) | 2308.08285#14 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
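A sketch of Eq. (1) with HuggingFace transformers: the similarity is the inner product of last-layer [CLS] representations. The shared bert-base-uncased checkpoint is a placeholder; the paper's setup would use trained query/passage encoders.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def cls_embedding(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    # v = last-layer hidden state at the [CLS] position
    return encoder(**inputs).last_hidden_state[:, 0]

def sim(query: str, passage: str) -> float:
    v_q, v_p = cls_embedding(query), cls_embedding(passage)
    return torch.dot(v_q.squeeze(), v_p.squeeze()).item()  # Sim(q,p) = v_q^T v_p
```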
2308.08493 | 14 | # 3 APPROACH
In our approach, we operate under two core assumptions: (1) lacking direct access to the pre-training data of the LLMs, and (2) having limited computational resources. Given these premises, our detection strategy for data contamination is anchored by two pivotal insights. First, we examine individual instances within a dataset partition to spot contamination at the instance level. Second, given that LLMs are pre-trained on large-scale data, detecting contaminated instances can act as a signal of broader contamination. As a result, the associated partition can be labeled as being leaked to the LLM's pre-training data.
To discern contamination at the instance level, we focus on replicating instances by the LLM. In this context, exact replicas of instances serve as red flags for contamination in the corresponding partition. Note that, due to the inherent probabilistic behavior of LLMs, achieving perfect replicas is not always possible even when contamination is certain. Nevertheless, instances that are closely replicated have a twofold function: while they can offer insightful indications of potential contamination, the fact that many datasets draw from web-based sources implies that partial replicas can also arise by happenstance. This overlap introduces uncertainty in drawing a definitive conclusion about the underlying partition. Thus, it is essential to check for consistent and significant signs of contamination within the partition. | 2308.08493#14 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08155 | 15 | Figure 2 provides a simple illustration. The bottom sub-figure shows how individual agents perform their role-specific, conversation-centric computations to generate responses (e.g., via LLM inference calls and code execution). The task progresses through conversations displayed in the dialog box. The middle sub-figure demonstrates a conversation-based control flow. When the assistant receives a message, the user proxy agent typically sends the human input as a reply. If there is no input, it executes any code in the assistant's message instead.
AutoGen features the following design patterns to facilitate conversation programming: | 2308.08155#15 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08285 | 15 | Sim(q, p) = Enc_q(q) · Enc_p(p) = v_q^T v_p (1)
The key to improving retrieval performance is to yield stronger representations v_q, v_p with better context alignment. The representations can be regarded as a compression of the full contexts. We believe that incorporating the strong context-generation abilities of LLMs into the pre-training stage with carefully designed pre-tasks can be a new way of improving such alignment.
# Bottlenecked Query Generation Pre-training
Bottlenecked pre-training trains a monomeric encoder (Enc) with good initialization abilities for subsequent fine-tuning. Given a tokenized sentence t ∈ T from the training corpus, we randomly select a certain ratio of tokens, with the corresponding indices denoted as M, and replace them with mask tokens [m]:
mask(t) = {[CLS], t_1, t_2, [m], t_4, ..., t_n, [SEP]} (2)
Cross-Entropy (CE) loss is then used as the Masked Language Modeling (MLM) loss to optimize the encoder. | 2308.08285#15 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
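A sketch of the masking step in Eq. (2); the 30% mask ratio is an illustrative choice, not necessarily the paper's setting.

```python
import random

def mask_tokens(tokens, mask_ratio=0.3, mask_token="[m]"):
    tokens = list(tokens)
    n_mask = max(1, int(len(tokens) * mask_ratio))
    masked_idx = random.sample(range(len(tokens)), n_mask)  # the index set M
    labels = {i: tokens[i] for i in masked_idx}             # ground-truth t_i
    for i in masked_idx:
        tokens[i] = mask_token
    return ["[CLS]"] + tokens + ["[SEP]"], labels
```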
2308.08493 | 15 | In the following sections, we first elaborate on our method and the necessary components to compel the LLM into reproducing dataset instances. We then delve into the procedure for evaluating the contamination status of existing LLMs for an entire partition based on these instances. Furthermore, leveraging the fine-tuning option offered by OpenAI for the GPT-3.5 base model, we undertake a study in which we intentionally contaminate the GPT-3.5 base model with partitions that our method detected as uncontaminated. Subsequently, we subject the contaminated GPT-3.5 to our technique, further showcasing our method's effectiveness in pinpointing data contamination within LLMs.
3.1 DETECTING INSTANCE-LEVEL CONTAMINATION
3.1.1 COMPONENTS TO MEASURE INSTANCE-LEVEL CONTAMINATION | 2308.08493#15 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
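Illustrative templates for the guided versus general instructions contrasted above; the paper's exact wording (shown in its Figure 1) may differ, so treat these as approximations.

```python
def guided_instruction(dataset: str, split: str, first_piece: str) -> str:
    # Names the dataset and partition to steer the LLM toward memorized data.
    return (f"You are provided with the first piece of an instance from the "
            f"{split} split of the {dataset} dataset. Finish it exactly as it "
            f"appears in the dataset:\n\n{first_piece}")

def general_instruction(first_piece: str) -> str:
    # Omits the dataset and partition name, serving as the baseline condition.
    return f"Finish the following text:\n\n{first_piece}"
```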
2308.08155 | 16 | 1. Unified interfaces and auto-reply mechanisms for automated agent chat. Agents in AutoGen have unified conversation interfaces for performing the corresponding conversation-centric computation, including a send/receive function for sending/receiving messages and a generate_reply function for taking actions and generating a response based on the received message. AutoGen also introduces and by default adopts an agent auto-reply mechanism to realize conversation-driven control: once an agent receives a message from another agent, it automatically invokes generate_reply and sends the reply back to the sender unless a termination condition is satisfied. AutoGen provides built-in reply functions based on LLM inference, code or function execution, or human input. One can also register custom reply functions to customize the behavior pattern of an agent, e.g., chatting with another agent before replying to the sender agent. Under this mechanism, once the reply functions are registered and the conversation is initialized, the conversation flow is naturally induced, and thus the agent conversation proceeds naturally without any extra control plane, i.e., a special module that controls the conversation flow. For example, with the developer code in the blue-shaded area | 2308.08155#16 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
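A conceptual sketch (ours, not AutoGen source) of the auto-reply mechanism described above: on receive, an agent walks its registered reply functions and sends the result back unless a termination condition fires.

```python
class MiniAgent:
    def __init__(self, name):
        self.name, self.reply_funcs = name, []

    def register_reply(self, func):
        self.reply_funcs.append(func)

    def send(self, message, recipient):
        recipient.receive(message, sender=self)

    def receive(self, message, sender):
        if message.get("terminate"):             # termination condition
            return
        reply = self.generate_reply(message, sender)
        if reply is not None:
            self.send(reply, sender)             # auto-reply keeps the chat going

    def generate_reply(self, message, sender):
        for func in self.reply_funcs:            # first registered function that
            reply = func(message, sender)        # returns a reply wins
            if reply is not None:
                return reply
        return None
```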
2308.08285 | 16 | # LLM Query Generation
Cross-Entropy (CE) loss is then used as the Masked Language Modeling (MLM) loss to optimize the encoder:
L_enc = − Σ_{t∈T} Σ_{i∈M} log p(t_i | Enc(mask(t))) (3)
where t_i denotes the ground-truth tokens w.r.t. the corresponding mask tokens [m].
Given a passage p, we use a zero-shot prompt for Alpaca-LLaMA and a few-shot prompt for tk-Instruct to expand queries, as illustrated in Figure 1. We empirically find that Alpaca 7B and 13B models work well on the zero-shot prompt, which helps save computation budgets. We manually write a few examples for tk-Instruct, as we find that few-shot prompts make its query generation more stable.
LLM-based document expansion enriches the pre-training corpus with additional contextual information. Instead of directly appending the expanded queries onto the passage, we seek to incorporate them into our pre-training stage for better initialization of end-to-end retrievers. Our work only involves LLM inference at the pre-training stage, not at the retrieval stage as in traditional query or document expansion works. Two pre-training paradigms are involved to incorporate the LLM-generated queries into the dense model pre-training. | 2308.08285#16 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
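A sketch of the zero-shot document-expansion step described above; the Alpaca-style template wording and the `generate` callable are illustrative assumptions (the paper's exact prompts appear in its Figure 1).

```python
def build_expansion_prompt(passage: str) -> str:
    # Alpaca-style zero-shot instruction asking the LLM for a query the
    # passage can answer; the wording here is a hypothetical stand-in.
    return (
        "Below is an instruction that describes a task, paired with an input.\n"
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\nWrite a search query that the passage below can answer.\n\n"
        f"### Input:\n{passage}\n\n### Response:\n"
    )

def expand_passage(passage: str, generate) -> str:
    """`generate` is any callable mapping a prompt string to model output,
    e.g., a wrapper around an Alpaca 7B/13B checkpoint."""
    return generate(build_expansion_prompt(passage)).strip()
```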
2308.08493 | 16 | 3.1 DETECTING INSTANCE-LEVEL CONTAMINATION
3.1.1 COMPONENTS TO MEASURE INSTANCE-LEVEL CONTAMINATION
To gauge instance-level contamination, we utilize two distinct methods: the first leverages BLEURT and ROUGE-L scores, while the second draws on few-shot ICL prompting with GPT-4. While each of these methods requires specific components to be employed, the first two components are shared between both. The third component, the general instruction, is exclusive to the first method. For both methods, we begin our process by steering the LLM towards the (potentially contaminated) dataset partition using guided instruction that integrates the dataset name and the partition of interest. Next, we provide the random-length initial segment of a randomly selected instance and its label if it is available. The LLM is then instructed to complete it. For the first method, we repeat this step using general instruction that omits the dataset and partition name. An example of a guided versus a general instruction is depicted in Figure 1. We detail all the required components below (a prompt-construction sketch follows this entry).
| 2308.08493#16 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
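A sketch of the two instruction types; the template wording is an illustrative assumption (the paper's per-dataset instructions are in its Table 5, Appendix A).

```python
def guided_instruction(dataset: str, split: str, first_piece: str,
                       label: str | None = None) -> str:
    # Steers the LLM toward a specific dataset partition by naming it.
    prompt = (
        f"You are provided with the first piece of an instance from the {split} "
        f"split of the {dataset} dataset. Finish the instance as it appears in "
        "the dataset. The completion must exactly match the dataset instance.\n"
        f"First piece: {first_piece}\n"
    )
    if label is not None:
        prompt += f"Label: {label}\n"
    return prompt + "Completion:"

def general_instruction(first_piece: str, label: str | None = None) -> str:
    # Identical request, but with no dataset or partition name: the baseline.
    prompt = f"Finish the following text.\nFirst piece: {first_piece}\n"
    if label is not None:
        prompt += f"Label: {label}\n"
    return prompt + "Completion:"
```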
2308.08155 | 17 | naturally without any extra control plane, i.e., a special module that controls the conversation flow. For example, with the developer code in the blue-shaded area (marked "Developer Code") of Figure 2, one can readily trigger the conversation among the agents, and the conversation would proceed automatically, as shown in the dialog box in the grey-shaded area (marked "Program Execution") of Figure 2 (a minimal trigger sketch follows this entry). The auto-reply mechanism provides a decentralized, modular, and unified way to define the workflow. | 2308.08155#17 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
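A minimal sketch of the "Developer Code" pattern referenced above, assuming the `pyautogen` package and an OpenAI-compatible model listed in a local OAI_CONFIG_LIST file (a hypothetical path); a single initiate_chat call is enough to trigger the automated exchange.

```python
import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",                      # fully automated run
    code_execution_config={"work_dir": "coding"},  # where proposed code is run
)

# After this one call, the auto-reply mechanism drives the whole dialog:
# the assistant proposes code, the user proxy executes it and replies.
user_proxy.initiate_chat(assistant, message="Plot a chart of stock price change YTD.")
```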
2308.08285 | 17 | where $t_i$ are the ground-truth tokens w.r.t. the corresponding mask tokens $[m]$.
An additional single-layer Transformer decoder (Dec) is further introduced, which receives as input the concatenation of the encoder representation $h^{[CLS]}_{last}$ and contextual texts $x$, e.g., LLM-generated queries (a sketch of this input construction follows this entry).

$$T_{ctx} = \{h^{[CLS]}_{last}, x_1, \ldots, x_N, [SEP]\} \tag{4}$$

Then the decoder uses the Causal Language Model (CLM) loss to generate the whole input context with the assistance of the encoder representation. | 2308.08285#17 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
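A minimal PyTorch sketch of the decoder input T_ctx from Eq. (4): the last-layer [CLS] vector is prepended to the context-token embeddings, so generating the context must route through the encoder bottleneck. Tensor shapes are assumptions for illustration.

```python
import torch

def build_decoder_inputs(cls_rep: torch.Tensor, ctx_embeds: torch.Tensor) -> torch.Tensor:
    """cls_rep: [batch, hidden], the encoder representation h_last^{[CLS]}.
    ctx_embeds: [batch, seq, hidden], embeddings of the context tokens
    x_1..x_N (e.g., an LLM-generated query) including the trailing [SEP]."""
    return torch.cat([cls_rep.unsqueeze(1), ctx_embeds], dim=1)  # [batch, 1+seq, hidden]
```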
2308.08493 | 17 | Guided instruction:
Instruction: You are provided with Sentence 1 from the validation split of the WNLI dataset. Finish Sentence 2 as appeared in the dataset. Sentence 2 must exactly match the instance in the dataset.
Sentence 1: The dog chased the cat, which ran up a tree. It waited at the top.
Label: 1 (entailment)
Sentence 2: The cat waited at the top.

General instruction:
Instruction: Finish Sentence 2 based on Sentence 1, such that the following label shows the logical relationship between Sentence 1 and Sentence 2.
Sentence 1: The dog chased the cat, which ran up a tree. It waited at the top.
Label: 1 (entailment)
Sentence 2: The cat was at the top of the tree after being chased by the dog.

Figure 1: An example of a guided (left) and general (right) instruction employed for a paired-instance dataset. In this example, using GPT-4, the guided instruction results in an exact match, whereas the general instruction does not. | 2308.08493#17 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08155 | 18 | 2. Control by fusion of programming and natural language. AutoGen allows the usage of programming and natural language in various control-flow management patterns: 1) Natural-language control via LLMs. In AutoGen, one can control the conversation flow by prompting the LLM-backed agents with natural language. For instance, the default system message of the built-in AssistantAgent in AutoGen uses natural language to instruct the agent to fix errors and generate code again if the previous result indicates there are errors. It also guides the agent to confine the LLM output to certain structures, making it easier for other tool-backed agents to consume; for example, it instructs the agent to reply with "TERMINATE" when all tasks are completed, which terminates the program. More concrete examples of natural-language controls can be found in Appendix C. 2) Programming-language control. In AutoGen, Python code can be used to specify the termination condition, human input mode, and tool execution logic, e.g., the maximum number of auto-replies (a configuration sketch follows this entry). One can also register programmed auto-reply functions to control the conversation flow with Python code, as shown in the code block identified as "Conversation-Driven Control Flow" in Figure 2. 3) Control transition between natural and programming language. | 2308.08155#18 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
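A short sketch of the programming-language control knobs named above, assuming the `pyautogen` package; the lambda turns the natural-language "TERMINATE" convention into a programmed termination condition.

```python
import autogen

user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,  # cap on automatic replies
    is_termination_msg=lambda msg: (msg.get("content") or "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "coding", "use_docker": False},
)
```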
2308.08285 | 18 | Model / Zero-shot Evaluation: MS-MARCO (MRR@10, Recall@50, Recall@1k), TREC DL 19 (nDCG@10), TREC DL 20 (nDCG@10)
BM25: 18.7, 59.2, 85.7, 51.2, 47.7
SimCSE (Gao, Yao, and Chen 2021)†: 8.7, 33.7, 64.6, 24.5, 17.9
coCondenser (Gao and Callan 2022)†: 7.5, 31.3, 58.1, 22.1, 20.7
Contriever (Izacard et al. 2021)†: 16.8, 60.8, 89.1, 44.5, 43.2
Contrastive Pre-training Baseline: 12.5, 49.0, 82.3, 36.0, 38.4
+ tk-inst 3b queries: 20.9 (+8.4), 70.2 (+21.2), 92.8 (+10.5), 47.0 (+11.0), 48.6 (+10.2)
+ Alpaca 7b queries: 22.6 (+10.1), 70.7 (+21.7), 93.8 (+11.5), 51.0 (+15.0), 48.9 (+10.5)
+ Alpaca 13b queries: 22.7 (+10.2), 71.7 (+22.7), 94.3 (+12.0), 53.9 (+17.9), 50.1 (+11.7) | 2308.08285#18 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
2308.08493 | 18 | (1) Guided Instruction: A Means to Navigate the LLM's Domain. By employing instruction-tuning on top of causal language modeling (CLM; Vaswani et al. (2017); Radford et al. (2018)), LLMs can be guided by human directives (Wei et al. 2022; Sanh et al. 2022; Chung et al. 2022). This serves as a tool for controlling the LLM's domain using natural language. Thus, we form guided instruction such that it incorporates the dataset and split name in the input prompt, thereby directing the LLM towards the underlying dataset split. A comprehensive list of all the instructions used in this study for different tasks/datasets can be found in Table 5 in Appendix A. | 2308.08493#18 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
2308.08155 | 19 | with Python code, as shown in the code block identified as "Conversation-Driven Control Flow" in Figure 2. 3) Control transition between natural and programming language. AutoGen also supports flexible control transition between natural and programming language. One can achieve transition from code to natural-language control by invoking an LLM inference containing certain control logic in a customized reply function; or transition from natural language to code control via LLM-proposed function calls (Eleti et al., 2023). | 2308.08155#19 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08285 | 19 | Table 1: Zero-shot evaluation of contrastive pre-training with LLM-based document expansion. † denotes our reproduced results. The best scores are marked in bold. Results with the increment over the corresponding baseline have been tested with two-tailed t-tests, demonstrating statistically significant improvements (p-value ≤ 0.01).
$$\mathcal{L}_{dec} = -\sum_{x_i \in T_{ctx}} \log p\big(x_i \mid \mathrm{Dec}(x_{[:i-1]})\big) \tag{5}$$

The final loss $\mathcal{L}$ is then formulated as follows.

$$\mathcal{L} = \mathcal{L}_{enc} + \mathcal{L}_{dec} \tag{6}$$

Through the bottlenecked encoder-decoder structure, we seek to compress the context signal from LLM-generated queries into the encoder representations and give strong initialization ability to the encoder (a loss sketch follows this entry). | 2308.08285#19 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
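A compact PyTorch sketch of the objective L = L_enc + L_dec from Eqs. (3), (5), and (6); the logits and -100-masked labels are assumed to come from the encoder MLM head and the single-layer decoder, following the usual Hugging Face labeling convention.

```python
import torch.nn.functional as F

def bottleneck_loss(enc_logits, mlm_labels, dec_logits, ctx_labels):
    # Eq. (3): cross-entropy over masked encoder tokens only
    # (labels are -100 at unmasked positions, so they are ignored).
    l_enc = F.cross_entropy(enc_logits.transpose(1, 2), mlm_labels, ignore_index=-100)
    # Eq. (5): causal LM cross-entropy over the context tokens x_i in T_ctx.
    l_dec = F.cross_entropy(dec_logits.transpose(1, 2), ctx_labels, ignore_index=-100)
    return l_enc + l_dec  # Eq. (6)
```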
2308.08493 | 19 | (2) Unraveling Data History with Causal Language Modeling. Primarily, data contamination occurs during the CLM pre-training phase since it constitutes the largest part of training in LLMs and utilizes web data. Without instruction tuning, an LLM only attempts to complete an input prompt based on data seen during the CLM pre-training phase (Ouyang et al. 2022). Notable models that exhibit this behavior include GPT-2 and GPT-3. We, therefore, employ the next token prediction mechanism to trace data history. In particular, we feed the model the variable-length initial segment of a dataset instance, chosen randomly from a particular split, prompting it to finish the partial instance. For labeled instances, we integrate the corresponding labels in the input prompt. This reflects that if an instance was ingested during the LLM's pre-training, its label was ingested too.
For paired-instance datasets, we present the model with the initial sentence and its corresponding label. In the case of single-instance datasets, instances with multiple sentences are arbitrarily cut at the end of a complete sentence, whereas for instances containing a single (long) sentence, a random sentence fragment is eliminated. Finally, the LLM is tasked with finishing the provided initial part (a cutting sketch follows this entry). Figure 1 shows this process for a paired-instance dataset. | 2308.08493#19 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
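A sketch of the random cut described above; the regex sentence splitter is a naive assumption, since the paper does not prescribe a particular one.

```python
import random
import re

def split_instance(text: str, rng: random.Random) -> tuple[str, str]:
    """Return (initial_segment, reference_completion) for one instance."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    if len(sentences) > 1:
        # Multi-sentence instance: cut at the end of a random complete sentence.
        cut = rng.randint(1, len(sentences) - 1)
        return " ".join(sentences[:cut]), " ".join(sentences[cut:])
    # Single (long) sentence: remove a random trailing fragment of words.
    words = text.split()
    cut = rng.randint(1, max(1, len(words) - 1))
    return " ".join(words[:cut]), " ".join(words[cut:])
```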
2308.08155 | 20 | In the conversation programming paradigm, one can realize multi-agent conversations of diverse patterns. In addition to static conversation with predefined flow, AutoGen also supports dynamic conversation flows with multiple agents. AutoGen provides two general ways to achieve this: 1) Customized generate_reply function: within the customized generate_reply function, one agent can hold the current conversation while invoking conversations with other agents depending on the content of the current message and context. 2) Function calls: In this approach, the LLM decides whether or not to call a particular function depending on the conversation status. By messaging additional agents in the called functions, the LLM can drive dynamic multi-agent conversation. In addition, AutoGen supports more complex dynamic group chat via the built-in GroupChatManager, which can dynamically select the next speaker and then broadcast its response to other agents (a group-chat sketch follows this entry). We elaborate on this feature and its application in Section 3. We provide implemented working systems to showcase all these different patterns, with some of them visualized in Figure 3.
# 3 Applications of AutoGen | 2308.08155#20 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
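A minimal dynamic group chat sketch, assuming the `pyautogen` package and an OAI_CONFIG_LIST file; `GroupChatManager` selects the next speaker and broadcasts its response, as described above. The agent roles are illustrative.

```python
import autogen

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}
coder = autogen.AssistantAgent("coder", llm_config=llm_config)
critic = autogen.AssistantAgent("critic", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent("user_proxy", human_input_mode="TERMINATE")

groupchat = autogen.GroupChat(agents=[user_proxy, coder, critic],
                              messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

# The manager dynamically picks who speaks next and relays each message.
user_proxy.initiate_chat(manager, message="Design and review a small CLI tool.")
```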
2308.08285 | 20 | Contrastive Pre-training
For reproduction and fair comparison, we adapt the contrastive pre-training architecture from coCondenser (Gao and Callan 2022). The passage $p$ and its sampled or generated context $p_{ctx}$ are directly forwarded through the encoder Enc. Besides the MLM loss $\mathcal{L}_{enc}$ of the encoder, an extra Transformer decoder $\mathrm{Dec}_{ext}$ is also introduced for representation pre-training, which takes the concatenation of the last-layer [CLS] encoder representation $h^{[CLS]}_{last}$ and the encoder hidden states $h^i_l$ from the $l$-th layer. Then a cross-entropy loss is used for the decoder's pre-task.

$$\mathcal{L}_{ext} = -\sum_{t \in T} \sum_{i \in M} \log p\big(t_i \mid \mathrm{Dec}_{ext}(h^{[CLS]}_{last}, h^1_l, \ldots, h^n_l)\big) \tag{7}$$

Differently, for pre-training with LLM-expanded queries, assuming $v_p$ and $v_{ctx}$ denote the encoder's representations, a contrastive loss with in-batch negatives is used as follows.

$$\mathcal{L}_{CL} = -\log \frac{\exp(v_p \cdot v^+_{ctx})}{\exp(v_p \cdot v^+_{ctx}) + \sum \exp(v_p \cdot v^-_{ctx})} \tag{8}$$

where $v^+_{ctx}$ denotes the representation of the context corresponding to $p$, and $v^-_{ctx}$ the representations of the context texts of the other passages in the batch (a PyTorch sketch of this loss follows this entry). | 2308.08285#20 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
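A PyTorch sketch of the in-batch-negative contrastive loss of Eq. (8): each passage vector v_p is pulled toward its own context vector and pushed away from the contexts of the other passages in the batch.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(v_p: torch.Tensor, v_ctx: torch.Tensor) -> torch.Tensor:
    """v_p, v_ctx: [batch, hidden]; row i of v_ctx is the positive for row i of v_p."""
    scores = v_p @ v_ctx.T                                   # dot-product similarities
    targets = torch.arange(v_p.size(0), device=v_p.device)   # positives on the diagonal
    return F.cross_entropy(scores, targets)                  # softmax over in-batch candidates
```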
2308.08493 | 20 | Therefore, once a contaminated LLM is prompted with guided instruction, its output should mirror the subsequent segment of the reference instance under the guidance of the dataset and split name.
(3) General Instruction: An Alternative Facet of Causal Language Modeling. We formulate the general instruction to measure the impact of the guidance given in the guided instruction. This general instruction only requests the completion of the partial instance without specifying the dataset or its partition. As a result, when using this instruction, the generated sequence solely relies on the CLM pre-training phase, akin to autoregressive models without instruction tuning. This enables us to establish a baseline for generated random replicas and assess how much the guided instruction influences the LLM-generated part of the input partial instance. We assess this influence in terms of overlap, semantics, and structural similarity with the reference instance. This analysis is crucial as even when the output of LLM does not perfectly match the reference instance, it still enables us to detect potential signs of contamination.
3.1.2 MEASURING INSTANCE-LEVEL CONTAMINATION
We introduce two methods for measuring contamination at the instance level (a scoring sketch follows this entry):
BLEURT & ROUGE-L: To quantify the overlap between the completions (produced under both guided and general instructions) and reference instances, we employ two metrics: ROUGE-L (Lin
| 2308.08493#20 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
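A scoring sketch for the two overlap metrics above, assuming the `rouge-score` package and Hugging Face `evaluate` with a BLEURT backend installed; the checkpoint choice is an assumption, not the paper's stated configuration.

```python
import evaluate
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
bleurt = evaluate.load("bleurt", "BLEURT-20")  # checkpoint is illustrative

def overlap_scores(reference: str, completion: str) -> dict:
    rouge_l = scorer.score(reference, completion)["rougeL"].fmeasure
    bleurt_s = bleurt.compute(predictions=[completion], references=[reference])["scores"][0]
    return {"rougeL": rouge_l, "bleurt": bleurt_s}
```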
2308.08155 | 21 | # 3 Applications of AutoGen
We demonstrate six applications using AutoGen (see Figure 3) to illustrate its potential in simplifying the development of high-performance multi-agent applications. These applications are selected based on their real-world relevance (A1, A2, A4, A5, A6), problem difficulty and solving capabilities enabled by AutoGen (A1, A2, A3, A4), and innovative potential (A5, A6). Together, these criteria showcase AutoGen's role in advancing the LLM-application landscape.
[Figure 3: application panels A1. Math Problem Solving; A2. Retrieval-augmented Chat (Retrieval-augmented Assistant); A3. ALF Chat; A4. Multi-agent Coding (Commander, Writer); A5. Dynamic Group Chat; A6. Conversational Chess (Human/AI Chess Player A, Human/AI Chess Player B, Chess Board)]
Figure 3: Six examples of diverse applications built using AutoGen. Their conversation patterns show AutoGen's flexibility and power.
# A1: Math Problem Solving | 2308.08155#21 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08155 | 22 | Figure 3: Six examples of diverse applications built using AutoGen. Their conversation patterns show AutoGen's flexibility and power.
# A1: Math Problem Solving
Mathematics is a foundational discipline, and the promise of leveraging LLMs to assist with math problem solving opens up a plethora of new applications and avenues for exploration, including personalized AI tutoring, AI research assistance, etc. This section demonstrates how AutoGen can help develop LLM applications for math problem solving, showcasing strong performance and flexibility in supporting various problem-solving paradigms. | 2308.08155#22 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08285 | 22 | Curriculum Learning
As discussed before, LLM-based document expansion faces the challenge of costly inference due to large numbers of documents or passages. Since we intend to pre-train our model with enriched contexts, inspired by the wisdom of curriculum learning (Bengio et al. 2009), we consider 1) a randomly cropped passage span as a coarse-grained context, and 2) the LLM-expanded queries as fine-grained context, as depicted in Figure 2. Following the span corruption strategies in the seed-encoder (Lu et al. 2021) and coCondenser (Gao and Callan 2022), we use the coarse-grained context as the passage itself in the bottlenecked generation pre-training, and the randomly sampled passage span in contrastive pre-training. As we focus on LLM-based document expansion, other span corruption strategies (Wu et al. 2023a) are left to our future work. After pre-training on a large amount of randomly cropped contexts, we initialize from the first stage and then use the fine-grained LLM-expanded queries for the second-phase pre-training (a scheduling sketch follows this entry). Experiments find that this curriculum strategy greatly reduces the need for LLM inferences on MS-MARCO passages, while still maintaining similar retrieval performance. | 2308.08285#22 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
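A sketch of the two-stage curriculum above: stage 1 pairs every passage with a coarse random crop of itself, and stage 2, initialized from stage 1, pairs a smaller set with fine-grained LLM-expanded queries. The `llm_queries` mapping is an assumed store of pre-generated queries.

```python
import random

def curriculum_context(passage_id: str, passage: str, stage: int,
                       llm_queries: dict, rng: random.Random) -> str:
    if stage == 1:
        # Coarse-grained context: a randomly cropped span of the passage.
        words = passage.split()
        span_len = max(1, len(words) // 2)
        start = rng.randint(0, max(0, len(words) - span_len))
        return " ".join(words[start:start + span_len])
    # Fine-grained context: the LLM-generated query, needed only in stage 2,
    # which is what reduces the number of LLM inferences required.
    return llm_queries[passage_id]
```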
2308.08493 | 22 | GPT-4 Evaluation: While both BLEURT and ROUGE-L quantify the overlap between the generated and reference instances, they fall short of pinpointing near-exact matches. To bridge this gap, we adopt few-shot ICL prompting (Brown et al. 2020) to dictate the detection of exact/near-exact matches based on human judgments (see Section 4: Human Evaluation for our definition of a near-exact match). Specifically, this method includes a few representative examples of exact and near-exact matches, sourced from human evaluations, in the prompt, which are used to assess all other generated completions (a judging sketch follows this entry). We chose GPT-4 for this task as it requires no specialized prompting technique (Bubeck et al. 2023), enhancing the reliability of its results. A visual representation of the few-shot ICL prompt used in our study can be seen in Figure 3 in Appendix B. Also, detailed examples, including their ROUGE-L and BLEURT scores, as well as both human and GPT-4 few-shot ICL evaluations, are listed in Table 6 in Appendix C.
3.2 DETECTING PARTITION-LEVEL CONTAMINATION | 2308.08493#22 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
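A sketch of the GPT-4 few-shot ICL judge, using the OpenAI Python client; the prompt wording and the omitted few-shot examples are assumptions standing in for the paper's prompt (its Figure 3, Appendix B).

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_TEMPLATE = """Given a few labeled examples (omitted here), decide whether the
generated completion is an exact match, a near-exact match, or an inexact match
of the reference.

Reference: {reference}
Completion: {completion}
Judgment:"""

def judge_match(reference: str, completion: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user",
                   "content": JUDGE_TEMPLATE.format(reference=reference,
                                                    completion=completion)}],
    )
    return response.choices[0].message.content.strip()
```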
2308.08155 | 23 | (Scenario 1) We are able to build a system for autonomous math problem solving by directly reusing two built-in agents from AutoGen. We evaluate our system and several alternative approaches, including open-source methods such as Multi-Agent Debate (Liang et al., 2023), LangChain ReAct (LangChain, 2023), vanilla GPT-4, and commercial products ChatGPT + Code Interpreter and ChatGPT + Plugin (Wolfram Alpha), on the MATH (Hendrycks et al., 2021) dataset and summarize the results in Figure 4a. We perform evaluations over 120 randomly selected level-5 problems and on the entire test dataset from MATH. The results show that the built-in agents from AutoGen already yield better performance out of the box compared to the alternative approaches, even including the commercial ones. (Scenario 2) We also showcase a human-in-the-loop problem-solving process with the help of AutoGen. To incorporate human feedback with AutoGen, one only needs to set human_input_mode="ALWAYS" in the UserProxyAgent of the system in scenario 1 (a minimal human-in-the-loop sketch follows this entry). We demonstrate that this system can effectively incorporate human inputs to solve challenging problems | 2308.08155#23 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
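A sketch of the Scenario 2 variant, assuming the `pyautogen` package: relative to the autonomous system, only human_input_mode changes, exactly as described above; the agent names and task are illustrative.

```python
import autogen

assistant = autogen.AssistantAgent(
    "assistant",
    llm_config={"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")},
)
user_proxy = autogen.UserProxyAgent(
    "math_user_proxy",
    human_input_mode="ALWAYS",                  # ask the human before every reply
    code_execution_config={"work_dir": "math"},
)
user_proxy.initiate_chat(assistant, message="Find all x with |x - 3| + |x + 1| = 6.")
```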
2308.08285 | 23 | Zero-shot evaluation and Fine-tuning
We conduct the zero-shot evaluation of the contrastive pre-trained encoder without fine-tuning on MS-MARCO, TREC-DL, and BEIR datasets. We conduct fine-tuning on both pre-training schemas to verify their retrieval initialization ability. Following DPR (Karpukhin et al. 2020), a simple contrastive loss is applied to optimize the retriever.
The final optimization objective is the sum of the above losses.
$$\mathcal{L} = -\log \frac{\exp(q \cdot p^+)}{\exp(q \cdot p^+) + \sum \exp(q \cdot p^-)} \tag{10}$$ | 2308.08285#23 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
2308.08493 | 23 | 3.2 DETECTING PARTITION-LEVEL CONTAMINATION
To generalize from instance-level contamination to partition-level discrete decisions (i.e., the partition is/is not contaminated), we take advantage of two observations:
Idea 1: A dataset is likely to be contaminated if the average overlap score with the reference instances (as measured by ROUGE-L and BLEURT) observed with completions from the guided instruction is significantly larger than the one measured with the completions from the general instruction (a significance-test sketch follows this entry). The motivation behind this idea is that since the only difference between the two instructions is that the guided instruction contains the dataset and partition name as guidance, the improvement can only be explained by contamination. | 2308.08493#23 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ "guided instruction:" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a "general instruction" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets. | http://arxiv.org/pdf/2308.08493 | Shahriar Golchin, Mihai Surdeanu | cs.CL, cs.AI, cs.CR, cs.LG | v2 preprint | null | cs.CL | 20230816 | 20231001 | [
{
"id": "2110.14168"
},
{
"id": "2204.02311"
},
{
"id": "1905.00537"
},
{
"id": "2308.08493"
},
{
"id": "2109.01652"
},
{
"id": "2306.01116"
}
] |
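A sketch of Idea 1's decision rule, assuming SciPy: a one-sided paired t-test compares per-instance overlap scores under guided vs. general instructions; the 0.05 threshold is an assumption.

```python
from scipy import stats

def partition_contaminated(guided_scores, general_scores, alpha: float = 0.05) -> bool:
    # One-sided paired test: guided scores significantly greater than general.
    result = stats.ttest_rel(guided_scores, general_scores, alternative="greater")
    return result.pvalue < alpha
```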
2308.08155 | 24 | in the UserProxyAgent of the system in scenario 1. We demonstrate that this system can effectively incorporate human inputs to solve challenging problems that cannot be solved without humans. (Scenario 3) We further demonstrate a novel scenario where multiple human users can participate in the conversations during the problem-solving process. Our experiments and case studies for these scenarios show that AutoGen enables better performance or a new experience compared to other solutions we experimented with. Due to the page limit, details of the evaluation, including case studies in three scenarios, are in Appendix D. | 2308.08155#24 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation | AutoGen is an open-source framework that allows developers to build LLM
applications via multiple agents that can converse with each other to
accomplish tasks. AutoGen agents are customizable, conversable, and can operate
in various modes that employ combinations of LLMs, human inputs, and tools.
Using AutoGen, developers can also flexibly define agent interaction behaviors.
Both natural language and computer code can be used to program flexible
conversation patterns for different applications. AutoGen serves as a generic
infrastructure to build diverse applications of various complexities and LLM
capacities. Empirical studies demonstrate the effectiveness of the framework in
many example applications, with domains ranging from mathematics, coding,
question answering, operations research, online decision-making, entertainment,
etc. | http://arxiv.org/pdf/2308.08155 | Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang | cs.AI, cs.CL | 43 pages (10 pages for the main text, 3 pages for references, and 30
pages for appendices) | null | cs.AI | 20230816 | 20231003 | [
{
"id": "2103.03874"
},
{
"id": "2303.17491"
},
{
"id": "2308.00352"
},
{
"id": "1802.08802"
},
{
"id": "2305.17126"
},
{
"id": "1706.05125"
},
{
"id": "2309.07864"
},
{
"id": "2108.11601"
},
{
"id": "2308.11432"
},
{
"id": "2305.16291"
},
{
"id": "2210.03629"
},
{
"id": "2304.07590"
},
{
"id": "2306.01337"
},
{
"id": "2305.14325"
},
{
"id": "2305.15334"
},
{
"id": "2307.16877"
},
{
"id": "2304.03442"
},
{
"id": "2307.03875"
},
{
"id": "1708.04782"
}
] |
2308.08285 | 24 | Fine-tuned results (MS-MARCO MRR@10 / Recall@50 / Recall@1k; TREC DL 19 nDCG@10; TREC DL 20 nDCG@10):
Contriever (Izacard et al. 2021)†: 33.4 / 85.0 / 98.4; 62.8; 63.2
Condenser (Gao and Callan 2021): 36.6 / 85.4† / 97.4; 69.8; 66.5†
coCondenser (Gao and Callan 2022): 38.2 / 86.5† / 98.4; 71.7†; 68.4†
SimLM (Wang et al. 2022a): 39.1 / 87.3† / 98.6; 68.9†; 68.8†
RetroMAE (Liu and Shao 2022): 39.3 / 87.0† / 98.5; 69.1†; 70.0†
CoT-MAE (Wu et al. 2023a): 39.4 / 87.0 / 98.7; 70.9†; 70.4
Contrastive Pre-training Baseline: 38.8 / 87.8 / …
+ tk-instruct 3b queries: 39.6+0.8 / 88.8+1.0 / …
+ Alpaca 7b queries: 40.0+1.2 / 89.0+1.2 / …
+ Alpaca 13b queries: 39.6+0.8 / … | 2308.08285#24 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 | [
{
"id": "2203.05765"
},
{
"id": "2205.09153"
},
{
"id": "2204.10641"
},
{
"id": "2212.07841"
},
{
"id": "2304.03158"
},
{
"id": "2205.12035"
},
{
"id": "2102.07662"
},
{
"id": "2003.07820"
}
] |
2308.08493 | 24 | Idea 2: A dataset is likely to be contaminated if GPT-4 using few-shot ICL prompting detects at least one exact match or at least two near-exact matches. The intuition behind this idea is that even a small contaminated part of the sample of instances is likely indicative of a larger dataset partition leak. While the presence of an exact match among replicas generated by the LLM is a clear sign of contamination, the approach to handling exact or near-exact matches (and deciding the number of such matches that indicates broader contamination) can be tailored depending on specific research objectives. In this paper, we intuitively establish the above-mentioned criterion to extrapolate from instance-level to partition-level contamination. An empirical validation of our approach is also provided in Section 3.3.
We propose two algorithms, one implementing each of these ideas.
Algorithm 1: A dataset partition is labeled as contaminated if the average overlap score (as provided by BLEURT and ROUGE-L) between the reference instances and the texts generated with the guided instruction, on a sample of ten instances, is statistically significantly better than the scores produced with the general instruction under a non-parametric bootstrap resampling test.2 | 2308.08493#24 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
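A minimal sketch of the bootstrap test behind Algorithm 1, assuming per-instance overlap scores have already been computed for both instruction types; the function name, resample count, and 0.05 significance level are illustrative choices:

```python
import numpy as np

def bootstrap_contamination_test(guided_scores, general_scores,
                                 n_boot=10_000, alpha=0.05, seed=0):
    """Paired bootstrap on per-instance BLEURT/ROUGE-L overlap scores for the
    same ten instances under guided vs. general instruction."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(guided_scores) - np.asarray(general_scores)
    boot_means = np.array([
        rng.choice(diffs, size=len(diffs), replace=True).mean()
        for _ in range(n_boot)
    ])
    # One-sided test: only an improvement under guidance suggests contamination.
    p_value = np.mean(boot_means <= 0.0)
    return p_value < alpha  # True -> label the partition as contaminated
```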
2308.08155 | 25 | # A2: Retrieval-Augmented Code Generation and Question Answering
Retrieval augmentation has emerged as a practical and effective approach for mitigating the intrinsic limitations of LLMs by incorporating external documents. In this section, we employ AutoGen to build a Retrieval-Augmented Generation (RAG) system (Lewis et al., 2020; Parvez et al., 2021) named Retrieval-augmented Chat. The system consists of two agents: a Retrieval-augmented User Proxy agent and a Retrieval-augmented Assistant agent, both of which are extended from built-in agents from AutoGen. The Retrieval-augmented User Proxy includes a vector database (Chroma,
5We did not evaluate ChatGPT on the whole dataset since it requires substantial manual effort and is restricted by its hourly message-number limitation. Multi-agent debate and LangChain ReAct were also not evaluated since they underperformed vanilla GPT-4 on the smaller test set.
[Figure 4 panels: (a) A1: Performance on MATH (w/ GPT-4); (b) A2: Q&A tasks (w/ GPT-3.5).] | 2308.08155#25 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
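A minimal sketch of how such a two-agent Retrieval-augmented Chat could be wired with AutoGen's contrib retrieval agents; exact APIs vary across AutoGen versions, and the model name and docs_path below are placeholders:

```python
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

assistant = RetrieveAssistantAgent(
    name="assistant",
    llm_config={"model": "gpt-3.5-turbo"},  # placeholder LLM config
)
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    retrieve_config={
        "task": "qa",
        "docs_path": "path/to/corpus",  # placeholder; indexed into a vector DB
    },
)
# The proxy retrieves context for the question and starts the conversation.
ragproxyagent.initiate_chat(assistant, problem="How does AutoGen define agents?")
```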
2308.08493 | 25 | The advantage of this algorithm is that it is non-parametric, i.e., we do not need to decide on an arbitrary threshold on the ROUGE-L or BLEURT scores to indicate contamination. However, its drawback is that even a significant increase in overlap may still come from generated instances that a human would not consider an exact or near-exact match. Algorithm 2 addresses this limitation.
Algorithm 2: A dataset partition is labeled as contaminated if GPT-4 with few-shot ICL prompting flags at least one generated completion as an exact match or a minimum of two completions as near-exact matches within a sample of ten instances. All completions in this setting are generated solely by guided instruction.
We evaluate both these algorithms in Section 5.
2Details of our bootstrap resampling method can be found in Appendix D.
| 2308.08493#25 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
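The partition-level decision rule of Algorithm 2 reduces to a simple count over the ten judged completions; a sketch (the label strings are assumptions):

```python
from collections import Counter

def partition_contaminated(labels: list[str]) -> bool:
    """labels: per-instance judgments from the GPT-4 few-shot ICL classifier,
    each one of {"exact", "near-exact", "inexact"}."""
    counts = Counter(labels)
    return counts["exact"] >= 1 or counts["near-exact"] >= 2
```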
2308.08155 | 26 | [Figure 4 panels: (c) A3: Performance on ALFWorld; (d) A4: Performance on OptiGuide.]
Figure 4: Performance on four applications A1-A4. (a) shows that AutoGen agents can be used out of the box to achieve the most competitive performance on math problem solving tasks; (b) shows that AutoGen can be used to realize effective retrieval augmentation and realize a novel interactive retrieval feature to boost performance on Q&A tasks; (c) shows that AutoGen can be used to introduce a three-agent system with a grounding agent to improve performance on ALFWorld; (d) shows that a multi-agent design is helpful in boosting performance in coding tasks that need safeguards.
2023) with SentenceTransformers (Reimers & Gurevych, 2019) as the context retriever. A detailed workflow description of the Retrieval-augmented Chat is provided in Appendix D. | 2308.08155#26 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
2308.08285 | 26 | Table 2: Fine-tuned results of pre-training with LLM-based document expansion. † denotes our reproduced results. The best scores are marked in bold. Results with the increment over the corresponding baseline have been tested with two-tailed t-tests, demonstrating statistically significant improvements (p-value ≤ 0.01).
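The objective that the following "where" clause refers to is cut off by the chunk boundary; in this line of work it is typically the InfoNCE-style contrastive loss, reconstructed here under that assumption, with s(·,·) denoting the similarity between query and passage embeddings:

```latex
\mathcal{L}_{\mathrm{CL}}
  = -\log \frac{\exp\big(s(q, p^{+})\big)}
               {\exp\big(s(q, p^{+})\big) + \sum_{p^{-}} \exp\big(s(q, p^{-})\big)}
```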
where q is a given query, and p+ and p− are its corresponding positive passage and negative passages, respectively.
# Experiments
This section introduces detailed experiment settings for pre-training and fine-tuning. Then we present the main results.
of pre-training with LLM-generated queries. We use the cosine scheduler with the same hyper-parameter settings for the first stage, and a constant learning rate for the second stage. All pre-training seeds are set to 42 for reproducibility. The encoders are directly tested on downstream tasks without fine-tuning for zero-shot evaluation.
| 2308.08285#26 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
2308.08493 | 26 |
Instruction: You are provided with Sentence 1 from the train split of the RTE dataset. Finish Sentence 2 as it appeared in the dataset. Sentence 2 must exactly match the instance in the dataset. Sentence 1: Twelve of Jupiter's moons are relatively small and seem to have been more likely captured than to have been formed in orbit around Jupiter. Label: 0 (not entailment) Sentence 2:
---
GPT-3.5: The formation of Jupiter's twelve relatively small moons is more likely
due to their capture rather than being formed in orbit around Jupiter.
---
Contaminated GPT-3.5: Jupiter has twelve moons.
Figure 2: An example of an exact match generated by the GPT-3.5 contaminated with the train split of the RTE dataset versus an inexact match generated by the GPT-3.5 base model, both under the same guided instruction. This example is one of the training instances used during contamination. | 2308.08493#26 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
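A sketch of the instance-level overlap scoring used by both algorithms, here with ROUGE-L via the rouge_score package (BLEURT scoring would follow the same pattern with its own checkpoint):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def overlap_score(completion: str, reference_tail: str) -> float:
    """ROUGE-L F1 between a generated completion and the reference's
    latter segment."""
    return scorer.score(reference_tail, completion)["rougeL"].fmeasure
```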
2308.08155 | 27 | We evaluate Retrieval-augmented Chat in both question-answering and code-generation scenarios. (Scenario 1) We first perform an evaluation regarding natural question answering on the Natural Questions dataset (Kwiatkowski et al., 2019) and report results in Figure 4b. In this evaluation, we compare our system with DPR (Dense Passage Retrieval) following an existing evaluation6 practice (Adlakha et al., 2023). Leveraging the conversational design and natural-language control, AutoGen introduces a novel interactive retrieval feature in this application: whenever the retrieved context does not contain the information, instead of terminating, the LLM-based assistant replies "Sorry, I cannot find any information about... UPDATE CONTEXT.", which invokes more retrieval attempts. We conduct an ablation study in which we prompt the assistant agent to say "I don't know" instead of "UPDATE CONTEXT." in cases where relevant information is not found, and report results in Figure 4b. The results show that the interactive retrieval mechanism indeed plays a non-trivial role in the process. | 2308.08155#27 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
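A minimal sketch of the "UPDATE CONTEXT" control loop described above; the retrieve/ask callables are hypothetical stand-ins for the agents' internals:

```python
UPDATE_TRIGGER = "UPDATE CONTEXT"

def answer_with_interactive_retrieval(question, retrieve, ask, max_rounds=5):
    """retrieve(question, round) -> context str; ask(question, context) -> reply."""
    for round_idx in range(max_rounds):
        context = retrieve(question, round_idx)  # fetch (more) documents
        reply = ask(question, context)
        if UPDATE_TRIGGER not in reply:
            return reply                         # grounded answer found
    return "I don't know."                       # fall back after max_rounds
```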
2308.08285 | 27 | # Pre-training
# Fine-tuning
Following the pre-training settings in (Gao and Callan 2022), we choose the MS-MARCO dataset (Nguyen et al. 2016) with 3.2M documents as our pre-training corpus. LLMs with different types and parameter sizes, i.e., Alpaca 7B, 13B (Wang et al. 2023), and tk-instruct 3B (Wang et al. 2022b), are used to generate the queries for LLM-based document expansion. Nucleus sampling with top-p = 0.95, top-k = 50, and temperature = 0.7 is used for LLM generation. | 2308.08285#27 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
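A sketch of query generation with the stated decoding settings (nucleus sampling, top-p 0.95, top-k 50, temperature 0.7) using HuggingFace transformers; the checkpoint path and prompt template are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/alpaca-7b")  # placeholder
model = AutoModelForCausalLM.from_pretrained("path/to/alpaca-7b")

passage = "..."  # a passage from the pre-training corpus
prompt = f"Write a search query that the following passage answers:\n{passage}\nQuery:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs, do_sample=True, top_p=0.95, top_k=50,
    temperature=0.7, max_new_tokens=32,
)
query = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
```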
2308.08493 | 27 | Table 1: Results after introducing intentional contamination to the GPT-3.5 base model using guided instruction. A tick (✓) indicates the identification of at least one exact replica from the training instances used for contamination by our top-performing method (Alg. 2: GPT-4 ICL) and human evaluation.
Table 2: Results of identifying contamination of the GSM8k dataset within GPT-4 when guided instruction is used. A double tick (✓✓) signals the identification of two or more near-exact replicas from the train split of this dataset by our top-performing method (Alg. 2: GPT-4 ICL) and human evaluation. | 2308.08493#27 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
2308.08155 | 28 | We give a concrete example and results using this appealing feature in Appendix D. (Scenario 2) We further demonstrate how Retrieval-augmented Chat aids in generating code based on a given codebase that contains code not included in GPT-4's training data. Evaluation and demonstration details for both scenarios are included in Appendix D. | 2308.08155#28 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
2308.08285 | 28 | For bottlenecked query generation pre-training, the encoder is initialized from the 12-layer BERT-base model (Devlin et al. 2019), while the single-layer decoder is randomly initialized from scratch. We use the AdamW optimizer with a learning rate of 3e-4, a batch size of 2048, total steps of 80k, and a warmup ratio of 0.1. The pre-training uses 8 Tesla A100 GPUs and trains for 19 hours. For contrastive pre-training, we adapt the code and architecture from (Gao and Callan 2022) and initialize from (Gao and Callan 2021) by following their settings. We use a learning rate of 1e-4, a batch size of 2048, and total steps of 120k, and keep other hyper-parameters the same as above, training for 50 hours. For curriculum learning, 75% of the total steps are used for the first stage of pre-training with sampled spans, and the remaining 25% of the steps are used for the second stage | 2308.08285#28 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
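A sketch of the stated optimization setup (AdamW at 3e-4 over 80k steps with a 0.1 warmup ratio); the Linear module is only a stand-in for the actual encoder-decoder, and the cosine schedule mirrors the scheduler mentioned for the first curriculum stage:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(768, 768)  # stand-in for the BERT encoder + decoder
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
total_steps = 80_000
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),  # warmup ratio of 0.1
    num_training_steps=total_steps,
)
```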
2308.08155 | 29 | 6The results of DPR with GPT-3.5 shown in Figure 4b are from (Adlakha et al., 2023). We use GPT-3.5 as a shorthand for GPT-3.5-turbo.
# A3: Decision Making in Text World Environments
In this subsection, we demonstrate how AutoGen can be used to develop effective applications that involve interactive or online decision making. We perform the study using the ALFWorld (Shridhar et al., 2021) benchmark, which includes a diverse collection of synthetic language-based interactive decision-making tasks in household environments. | 2308.08155#29 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
2308.08285 | 29 | The encoder is fine-tuned and tested on the MS-MARCO Passage Ranking task (Nguyen et al. 2016), TREC Deep Learning (DL) 2019 (Craswell et al. 2020), and 2020 (Craswell et al. 2021). The MS-MARCO Passage Ranking dataset contains 8.8 million passages and 500k human-annotated query-passage pairs. Following (Gao and Callan 2021), we report the performance metrics on MRR@10, Recall@50, and Recall@1K, and evaluate the models on its development set with 6,980 queries, because its test set is not publicly available. The TREC-DL 2019 and 2020 test sets both contain 200 annotated queries. We adopt the Tevatron pipeline (Gao et al. 2022) with the AdamW optimizer, a learning rate of 2e-5, a batch size of 8, 15 negative samples per passage, and a negative depth of 200, and train for 3 epochs. The performance metrics of TREC and BEIR are reported on NDCG@10.
# Baselines | 2308.08285#29 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
2308.08155 | 30 | With AutoGen, we implemented a two-agent system to solve tasks from ALFWorld. It consists of an LLM-backed assistant agent responsible for suggesting plans to conduct a task and an executor agent responsible for executing actions in the ALFWorld environments. This system integrates ReAct prompting (Yao et al., 2022), and is able to achieve similar performance. A common challenge encountered in both ReAct and the AutoGen-based two-agent system is their occasional inability to leverage basic commonsense knowledge about the physical world. This deficiency can lead to the system getting stuck in a loop due to repetitive errors. Fortunately, the modular design of AutoGen allows us to address this issue effectively: with AutoGen, we are able to introduce a grounding agent, which supplies crucial commonsense knowledge, such as "You must find and take the object before you can examine it. You must go to where the target object is before you can use it.", whenever the system exhibits early signs of recurring errors. It significantly enhances the system's ability to avoid getting | 2308.08155#30 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
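A minimal sketch of the three-agent variant with a grounding agent, following AutoGen's GroupChat pattern; the system messages and configs are illustrative, and a real executor would act in the ALFWorld environment rather than idle:

```python
import autogen

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="Suggest a plan and the next ALFWorld action.",
    llm_config={"model": "gpt-3.5-turbo"},  # placeholder config
)
executor = autogen.UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",  # stands in for executing actions in ALFWorld
)
grounding_agent = autogen.AssistantAgent(
    name="grounding_agent",
    system_message=(
        "If the conversation shows repeated errors, remind the others: find and "
        "take an object before examining it; go to where the target object is "
        "before using it."
    ),
    llm_config={"model": "gpt-3.5-turbo"},
)
groupchat = autogen.GroupChat(
    agents=[assistant, executor, grounding_agent], messages=[], max_round=20
)
manager = autogen.GroupChatManager(groupchat=groupchat)
executor.initiate_chat(manager, message="Task: put a clean plate on the countertop.")
```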
2308.08493 | 30 | To validate our choice of the hyperparameters used in Algorithm 2, i.e., the number of exact/near-exact matches needed to declare contamination, we performed a controlled study in which an LLM is contaminated on purpose with several datasets. To this end, we used the GPT-3.5 base model and a subset of the train partition of the following datasets (one dataset from each task in question): AG News, RTE, and XSum. Note that all these partitions were marked as uncontaminated for GPT-3.5 by the human evaluators (see Table 4 and Section 4: Human Evaluation). To mimic the LLM's pre-training on web data, we retained only minimal metadata about the datasets as they appear on the web when scraped. In particular, we used: the dataset title, the partition name, and the entire instance.3 Following training, we evaluate the generated completions with our best-performing technique (Algorithm 2: GPT-4 ICL) (see Table 3). Figure 2 visualizes | 2308.08493#30 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
2308.08155 | 31 | entangled in error loops. We compare the task-solving performance of the two variants of our system with GPT-3.5-turbo and ReAct7 on the 134 unseen tasks from ALFWorld and report results in Figure 4c. The results show that introducing a grounding agent could bring in a 15% performance gain on average. Upon examining the systems' outputs, we observe that the grounding agent, by delivering background commonsense knowledge at the right junctures, significantly mitigated the tendency of the system to persist with a flawed plan, thereby avoiding the creation of error loops. For an example trajectory comparing the systems, see Appendix D, Figure 10. | 2308.08155#31 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
2308.08285 | 31 | Results / nDCG@10 (zero-shot BEIR), per dataset for BM25, coCondenser, Contriever, SimCSE, Baseline:
TREC-COVID: 65.6, 21.2, 27.3, 27.5, 16.2
NFCorpus: 32.5, 13.7, 31.7, 10.5, 29.9
NQ: 32.9, 10.7, 25.4, 16.3, 9.3
HotpotQA: 60.3, 22.3, 48.1, 23.8, 24.2
FiQA-2018: 23.6, 7.2, 24.5, 9.7, 19.6
ArguAna: 31.5, 34.4, 37.9, 28.0, 35.8
Touché-2020: 36.7, 5.8, 16.7, 13.4, 8.1
CQADupStack: 29.9, 10.5, 28.4, 13.5, 18.2
Quora: 78.9, 71.3, 83.5, 73.7, 75.8
DBPedia: 31.3, 16.3, 29.2, 16.7, 22.5
SCIDOCS: 15.8, 4.6, 14.9, 6.1, 10.4
FEVER: 75.3, 16.8, 68.2, 29.2, 43.6
Climate-FEVER: 21.3, 6.4, 15.5, 14.2, 8.5
SciFact: 66.5, 43.2, 64.9, 25.0, 52.7
+ tk-Instruct 3b: TREC-COVID 36.8+20.6 | 2308.08285#31 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
2308.08493 | 31 | the generated replicas before and after contamination in one of our experiments when guided instruction is utilized.4 In addition, Table 1 summarizes our findings from this study. The key conclusion of this experiment is that the contaminated LLM generated at least one exact match in each setting. This underscores that the replication of even one exact match stands as a robust and undeniable indicator of contamination.5 | 2308.08493#31 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
2308.08285 | 32 | + tk-Instruct 3b (cont.): NFCorpus 33.1+3.2, NQ 34.3+25.0, HotpotQA 56.2+32.0, FiQA-2018 29.8+10.3, ArguAna 44.6+8.8, Touché-2020 16.3+8.2, CQADupStack 30.9+12.8, Quora 83.8+8.0, DBPedia 30.2+7.7, SCIDOCS 13.6+3.2, FEVER 61.9+18.3, Climate-FEVER 18.4+9.8, SciFact 64.4+11.7, Avg 39.6+12.8
+ Alpaca 7b: TREC-COVID 52.3+36.1, NFCorpus 30.9+1.0, NQ 31.8+22.5, HotpotQA 51.5+27.3, FiQA-2018 27.2+7.6, ArguAna 40.5+4.8, Touché-2020 13.7+5.5, CQADupStack 32.4+14.2, Quora 83.3+7.5, DBPedia 28.8+6.3, SCIDOCS 13.5+3.2, FEVER 67.2+23.6, Climate-FEVER 13.8+5.3, SciFact 60.8+8.1, Avg 39.1+12.4
+ Alpaca 13b: TREC-COVID 54.7+38.5, NFCorpus 33.5+3.5, NQ 31.9+22.6, HotpotQA 51.8+27.6, FiQA-2018 28.6+9.0, ArguAna 40.6+4.9 | 2308.08285#32 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
2308.08493 | 32 | As a second experiment, we employed GPT-4 and the GSM8k dataset (Cobbe et al. 2021). This choice was motivated by OpenAI's technical report on GPT-4, which indicates contamination from its train split (OpenAI 2023). Given that this dataset comprises mathematical problems, our objective is to replicate the questions in the dataset while withholding their corresponding answers.6 Table 2 reports our results from this experiment. Our results highlight that contamination is not solely identified through exact matches; near-exact matches are also indicative. To account for the probabilistic nature of LLMs, we set a threshold of two for the minimum number of near-exact matches to indicate contamination. As shown, this is supported by the data.
3All data formats used for the contamination of GPT-3.5 are detailed in Table 9 in Appendix E.
4Further examples are provided in Table 10 in Appendix F.
5Details on the continued training of the GPT-3.5 base model are presented in Appendix E.
6An example of this replication process is provided in Table 10 in Appendix F.
# 4 EXPERIMENTAL SETUP | 2308.08493#32 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
2308.08155 | 33 | In this subsection, we use AutoGen to build a multi-agent coding system based on OptiGuide (Li et al., 2023a), a system that excels at writing code to interpret optimization solutions and answer user questions, such as exploring the implications of changing a supply-chain decision or understanding why the optimizer made a particular choice. The second sub-figure of Figure 3 shows the AutoGen-based implementation. The workflow is as follows: the end user sends questions, such as "What if we prohibit shipping from supplier 1 to roastery 2?", to the Commander agent. The Commander coordinates with two assistant agents, the Writer and the Safeguard, to answer the question. The Writer crafts code and sends it to the Commander. After receiving the code, the Commander checks the code's safety with the Safeguard; if cleared, the Commander uses external tools (e.g., Python) to execute the code, and requests the Writer to interpret the execution results. For instance, the Writer may say "if we prohibit shipping from supplier 1 to roastery 2, the total cost would increase by 10.5%." The Commander then provides this concluding answer to the end user. If, at a particular | 2308.08155#33 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
2308.08493 | 33 |
Data: Our evaluation employs seven datasets derived from various tasks, namely classification, summarization, and NLI. The datasets are IMDB (Maas et al. 2011), AG News (Zhang et al. 2015), Yelp Full Reviews (Zhang et al. 2015), SAMSum (Gliwa et al. 2019), XSum (Narayan et al. 2018), WNLI (Wang et al. 2018), and RTE (Wang et al. 2019). To ensure a comprehensive experimental setup, all our experiments are carried out on both the training and test/validation splits of these datasets, using the publicly available divisions. For the last two datasets, only the validation splits were publicly accessible with their labels. Considering our research's emphasis on pinpointing data contamination with minimal dataset instances, the resource constraints, and our intention to facilitate the replication of this approach by other researchers, we randomly chose 10 instances from each split for our experiments. | 2308.08493#33 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
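For illustration, drawing 10 random instances from a public split can be done with the Hugging Face datasets package; the dataset name and seed below are arbitrary choices, not the paper's exact code.

```python
# Illustrative only: sample 10 instances from a public dataset split,
# mirroring the setup of 10 random instances per partition.
import random
from datasets import load_dataset

random.seed(42)                          # arbitrary seed for reproducibility
train = load_dataset("ag_news", split="train")
indices = random.sample(range(len(train)), k=10)
subset = train.select(indices)           # 10 randomly chosen training instances
for row in subset:
    print(row["text"][:80], "->", row["label"])
```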
2308.08155 | 34 | step, there is an exception, e.g., a security red flag raised by the Safeguard, the Commander redirects the issue back to the Writer with debugging information. The process may be repeated multiple times until the user's question is answered or the exchange times out. | 2308.08155#34 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
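A minimal sketch of how the Commander/Writer/Safeguard loop described above might be wired up with AutoGen's conversable agents; the system messages, config handling, and the manual orchestration steps are illustrative, not OptiGuide's actual implementation.

```python
# Sketch (not the exact OptiGuide code) of the Commander/Writer/Safeguard pattern.
import autogen

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}

writer = autogen.AssistantAgent(
    name="Writer",
    system_message="You write Python code that queries the supply-chain "
                   "optimization model, and you interpret execution results.",
    llm_config=llm_config,
)
safeguard = autogen.AssistantAgent(
    name="Safeguard",
    system_message="Inspect the given code. Reply DANGER if it is unsafe to run, "
                   "otherwise reply SAFE.",
    llm_config=llm_config,
)
# The Commander relays messages and owns the code-execution environment.
commander = autogen.UserProxyAgent(
    name="Commander",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,   # one round trip per initiate_chat in this sketch
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

question = "What if we prohibit shipping from supplier 1 to roastery 2?"
# 1) Ask the Writer for code that answers the question.
commander.initiate_chat(writer, message=question)
code = commander.last_message(writer)["content"]
# 2) Screen the code with the Safeguard before execution.
commander.initiate_chat(safeguard, message=f"Is this code safe to run?\n{code}")
verdict = commander.last_message(safeguard)["content"]
# 3) Only execute via the Commander's executor if cleared; on a red flag,
#    loop back to the Writer with the Safeguard's feedback as debugging info.
```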
2308.08285 | 34 | Table 3: Out-of-domain zero-shot evaluation of contrastive pre-training with LLM-based document expansion on the BEIR benchmark. All baselines tested on nDCG@10 are based on our reproduction. Results with the increment over the corresponding baseline have been tested with two-tailed t-tests, demonstrating statistically significant improvements (p-value ≤ 0.01).
2022) by following their hyper-parameter settings, and other baselines are based on our settings. | 2308.08285#34 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval | In this paper, we systematically study the potential of pre-training with
Large Language Model(LLM)-based document expansion for dense passage retrieval.
Concretely, we leverage the capabilities of LLMs for document expansion, i.e.
query generation, and effectively transfer expanded knowledge to retrievers
using pre-training strategies tailored for passage retrieval. These strategies
include contrastive learning and bottlenecked query generation. Furthermore, we
incorporate a curriculum learning strategy to reduce the reliance on LLM
inferences. Experimental results demonstrate that pre-training with LLM-based
document expansion significantly boosts the retrieval performance on
large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain
retrieval abilities, making it more widely applicable for retrieval when
initializing with no human-labeled data. | http://arxiv.org/pdf/2308.08285 | Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu | cs.IR, cs.CL | 10 pages, 3 tables, 4 figures, under review | null | cs.IR | 20230816 | 20230816 |
2308.08493 | 34 | Setting: We use snapshots of GPT-3.5 and GPT-4 from June 13, 2023 (specifically gpt-3.5-turbo-0613 and gpt-4-0613), both accessed via the OpenAI API, as our foundation LLMs. To obtain deterministic results, we set the temperature to zero and capped the maximum completion length at 500 tokens. In contrast, our comparative method (ChatGPT-Cheat?) uses the chat user interface (UI), which we also leveraged for conducting the experiment under this method. Specifically, we used the UI versions of GPT-4 and GPT-3.5 that were released on July 20, 2023. | 2308.08493#34 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
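A sketch of how one guided-instruction completion could be requested under the settings above (temperature zero, 500-token cap). The prompt wording is paraphrased from the method description rather than the paper's exact template, the initial segment is hypothetical, and the pre-1.0 openai Python package is assumed.

```python
# Sketch of one guided-instruction query (pre-1.0 openai package assumed).
import openai

first_piece = "Oscar-winning actress Joan Fontaine, star of ..."  # hypothetical initial segment
guided_prompt = (
    "You are provided with the first piece of an instance from the train split "
    "of the AG News dataset. Finish the second piece of the instance exactly as "
    f"it appears in the dataset.\n\nFirst piece: {first_piece}\n\nSecond piece:"
)

response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    temperature=0,        # deterministic completions
    max_tokens=500,       # cap on completion length
    messages=[{"role": "user", "content": guided_prompt}],
)
completion = response["choices"][0]["message"]["content"]
```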
2308.08155 | 35 | With AutoGen, the core workflow code for OptiGuide was reduced from over 430 lines to 100 lines, leading to a significant productivity improvement. We provide a detailed comparison of user experience with ChatGPT+Code Interpreter and AutoGen-based OptiGuide in Appendix D, where we show that AutoGen-based OptiGuide could save around 3x of the user's time and reduce user interactions by 3 to 5 times on average. We also conduct an ablation showing that the multi-agent abstraction is necessary. Specifically, we construct a single-agent approach in which one agent conducts both the code-writing and safeguard processes. We tested the single- and multi-agent approaches on a dataset of 100 coding tasks, crafted to include equal numbers of safe and unsafe tasks. Evaluation results reported in Figure 4d show that the multi-agent design boosts the F-1 score in identifying unsafe code by 8% (with GPT-4) and 35% (with GPT-3.5-turbo).
7 Results of ReAct are obtained by directly running its official code with default settings. The code uses text-davinci-003 as the backend LM and does not support GPT-3.5-turbo or GPT-4.
| 2308.08155#35 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
2308.08285 | 35 |
We also compare with several other strong baselines, including the traditional sparse retrieval method BM25 (Robertson, Zaragoza et al. 2009), the unsupervised sentence similarity encoder SimCSE (Gao, Yao, and Chen 2021), and the unsupervised contrastive pre-training methods coCondenser (Gao and Callan 2022) and Contriever (Izacard et al. 2021) for zero-shot evaluation. For fine-tuned results, we also compare with the latest bottlenecked pre-training methods, including Condenser (Gao and Callan 2021), SimLM (Wang et al. 2022a), RetroMAE (Liu and Shao 2022) and CoT-MAE (Wu et al. 2023a). Note that recent bottlenecked methods using multi-task pre-training (Zhou et al. 2022) or hybrid retrieval (Liu et al. 2023; Wu et al. 2023b) are not compared, as they are beyond the scope of a fair comparison. | 2308.08285#35 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
2308.08493 | 35 | Human Evaluation: We undertake a human evaluation, led by two domain experts,7 to characterize contamination by identifying both exact matches and near-exact matches of individual instances. The term "exact matches" is self-explanatory; "near-exact matches" are completions by the LLM that, while not identical, show considerable overlap and maintain significant semantic and structural similarity to the reference instance. To generalize from individual instances to entire partitions, the human annotators followed the rule described in Algorithm 2, which was validated empirically in Section 3.3: a partition is flagged as contaminated if the instance-based evaluation identifies at least one exact match or at least two near-exact matches. | 2308.08493#35 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
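The partition-level rule the annotators applied (at least one exact match, or at least two near-exact matches) is simple to state in code; a minimal sketch over per-instance labels follows.

```python
# Partition-level contamination rule used in the human evaluation:
# contaminated iff >= 1 exact match or >= 2 near-exact matches among
# the evaluated instances of the partition.
def partition_contaminated(labels: list[str]) -> bool:
    exact = labels.count("exact")
    near = labels.count("near_exact")
    return exact >= 1 or near >= 2

# Example: one exact match among ten instances flags the partition.
print(partition_contaminated(["none"] * 9 + ["exact"]))  # True
```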
2308.08155 | 36 |
# A5: Dynamic Group Chat
AutoGen provides native support for a dynamic group chat communication pattern, in which participating agents share the same context and converse with the others in a dynamic manner instead of following a pre-defined order. Dynamic group chat relies on ongoing conversations to guide the flow of interaction among agents, which makes it ideal for situations where collaboration without a strict communication order is beneficial. In AutoGen, the GroupChatManager class serves as the conductor of the conversation among agents and repeats the following three steps: dynamically selecting a speaker, collecting responses from the selected speaker, and broadcasting the message (Figure 3-A5). For the dynamic speaker-selection component, we use a role-play style prompt. Through a pilot study on 12 manually crafted complex tasks, we observed that, compared to a prompt purely based on the task, utilizing a role-play prompt often leads to more effective consideration of both conversation context and role alignment during the problem-solving and speaker-selection process. Consequently, this leads to a higher success rate and fewer LLM calls. We include detailed results in Appendix D. A minimal sketch of this setup is shown below.
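The sketch below wires three agents into a dynamic group chat using AutoGen's GroupChat and GroupChatManager classes; the agent roster and the task message are illustrative.

```python
# Sketch: three agents in a dynamic group chat moderated by a GroupChatManager.
import autogen

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}

planner = autogen.AssistantAgent(name="Planner", llm_config=llm_config)
coder = autogen.AssistantAgent(name="Coder", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    human_input_mode="TERMINATE",   # a human can step in before termination
    code_execution_config={"work_dir": "groupchat", "use_docker": False},
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, planner, coder],
    messages=[],
    max_round=12,
)
# The manager repeats the three steps described above: (1) select the next
# speaker, (2) collect its reply, (3) broadcast the message to all agents.
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Plot META and TESLA stock price change YTD.")
```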
# A6: Conversational Chess | 2308.08155#36 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
2308.08285 | 36 | Zero-shot Evaluation. Table 1 reports the in-domain zero-shot evaluation of contrastive pre-training with LLM-based document expansion. Pre-training with LLM-expanded queries shows clear improvements over its baselines that merely use randomly sampled passages. This indicates that our method achieves strong zero-shot retrieval abilities for in-domain evaluation on the MS-MARCO and TREC-DL 19 & 20 datasets.
MS-MARCO passage task (in Recall@50 and Recall@1k) and TREC-DL 19 & 20 (in nDCG@10). 2) Bottlenecked query generation gives better initialization on MS-MARCO w.r.t. the officially preferred metric MRR@10, but still lies behind contrastive pre-training in other metrics. | 2308.08285#36 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
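The contrastive objective behind this pre-training is a standard InfoNCE loss over (expanded query, passage) pairs with in-batch negatives; a generic sketch follows, where the encoder, pooling, and temperature are illustrative and may differ from the paper's exact choices.

```python
# Generic InfoNCE contrastive loss with in-batch negatives for
# (LLM-expanded query, passage) pairs; details are illustrative.
import torch
import torch.nn.functional as F

def info_nce(q_emb: torch.Tensor, p_emb: torch.Tensor, temperature: float = 0.05):
    """q_emb, p_emb: [batch, dim] embeddings; row i of p_emb is the positive
    passage for query i, and all other rows serve as in-batch negatives."""
    q_emb = F.normalize(q_emb, dim=-1)
    p_emb = F.normalize(p_emb, dim=-1)
    logits = q_emb @ p_emb.T / temperature            # [batch, batch] similarities
    targets = torch.arange(q_emb.size(0), device=q_emb.device)
    return F.cross_entropy(logits, targets)           # positives on the diagonal

# Example with random embeddings standing in for encoder outputs:
loss = info_nce(torch.randn(8, 768), torch.randn(8, 768))
```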
2308.08493 | 36 | Evaluation Metrics: In our analysis, the computation of the BLEURT score varies based on the structure of the dataset/instance, as this metric hinges on the fluency and quality of the generated sequence. For single-instance datasets, where individual instances are randomly cut off mid-sentence and then completed by the LLM, we join the model-produced continuation to the severed reference instance and then calculate the BLEURT score. Conversely, for instances from paired-instance and multi-sentence single-instance datasets, the BLEURT score is computed solely for the newly produced sequence. We highlight that our BLEURT score computations use the most recent checkpoint provided, i.e., BLEURT-20 (Pu et al. 2021). On the other hand, regardless of the dataset/instance type, the ROUGE-L score calculation exclusively pertains to the portions of the text finished by the LLM. This is due to the score's dependency on statistical attributes rather than semantic consistency. | 2308.08493#36 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
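Computing the ROUGE-L overlap between an LLM completion and the reference continuation might look like this, using the rouge_score package; the two strings are hypothetical. BLEURT-20 scoring follows the same shape with the bleurt package, which is heavier to set up, so it is sketched in comments.

```python
# ROUGE-L between the LLM-completed portion and the reference continuation.
from rouge_score import rouge_scorer

reference_tail = "the stock market rallied after the announcement."      # hypothetical
model_completion = "stock markets rallied following the announcement."   # hypothetical

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference_tail, model_completion)["rougeL"].fmeasure
print(f"ROUGE-L F1: {rouge_l:.3f}")

# BLEURT-20 (semantic similarity); requires the downloaded checkpoint:
# from bleurt import score as bleurt_score
# bleurt = bleurt_score.BleurtScorer("BLEURT-20")
# [b] = bleurt.score(references=[reference_tail], candidates=[model_completion])
```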
2308.08285 | 37 | Out-of-domain Evaluation. We also evaluate contrastive pre-training with LLM-based document expansion on the out-of-domain zero-shot BEIR benchmark and report the metric (nDCG@10) in Table 3. BM25 is a very strong baseline w.r.t. all the other contrastive pre-training methods that do not go through human-labeled fine-tuning. Nevertheless, our method still shows strong improvements over its contrastive baseline. Specifically, compared with Contriever (Izacard et al. 2021), an unsupervised contrastive method pre-trained on the much larger corpus CCNET (Wenzek et al. 2020), pre-training with LLM expansion also shows superior retrieval performance.
Extended Analyses. In this section, we analyze the effect of scaling up LLMs and of the curriculum learning strategy, with expanded queries generated by Alpaca 13b.1
# Fine-tuned Retrieval
The fine-tuned results of the two pre-training methods, i.e., contrastive pre-training and bottlenecked query generation pre-training, are presented in Table 2. Pre-training with LLM-expanded queries also gives a statistically significant boost over the corresponding baselines and counterparts. In addition, we notice that 1) Contrastive pre-training gives better results on the | 2308.08285#37 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
2308.08493 | 37 | Comparative Framework: We compare our proposed methods against the ChatGPT-Cheat? method (Sainz et al. 2023). Unlike our method, which uses a binary scale to determine contamination, the comparison approach includes a "suspicious" category. This designation is invoked when the LLM, upon being asked to generate the first instances of a dataset split, outputs characteristic attributes such as data format, IDs, or other dataset-specific details instead of the actual instances. If the model, on the other hand, fails to produce these characteristics, it is deemed uncontaminated.
# 5 RESULTS AND DISCUSSION
Table 3 lists the overall accuracy of our proposed methods in 28 distinct settings: two LLMs (GPT-4 and GPT-3.5) × 14 dataset partitions coming from seven datasets. Table 4 provides a detailed breakdown of each method per dataset partition and the respective LLM. We draw the following observations from our experiments:
(1) Algorithm 1, which hinges on the difference in average overlap scores between outputs from guided instruction and those from general instruction, performs well in the majority of settings. Its best performance is a success rate of 13/14 when using GPT-4 as the underlying model and 9/14 | 2308.08493#37 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
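At its core, the partition-level decision in Algorithm 1 reduces to a one-sided significance test on per-instance overlap scores from the two instructions. The sketch below uses a paired t-test as a stand-in for the paper's exact statistical test, an assumed 0.05 significance level, and made-up scores.

```python
# Sketch of the Algorithm 1 decision: flag a partition as contaminated when
# guided-instruction overlap scores are significantly higher than
# general-instruction scores. Scores, test, and threshold are illustrative.
from scipy import stats

guided_scores = [0.71, 0.64, 0.80, 0.58, 0.66, 0.77, 0.69, 0.73, 0.62, 0.75]
general_scores = [0.41, 0.38, 0.45, 0.36, 0.40, 0.43, 0.39, 0.44, 0.37, 0.42]

t_stat, p_two_sided = stats.ttest_rel(guided_scores, general_scores)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

contaminated = p_one_sided < 0.05   # assumed significance level
print(f"t={t_stat:.2f}, one-sided p={p_one_sided:.4f}, contaminated={contaminated}")
```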
2308.08155 | 38 | With AutoGen, we enabled two essential features: (1) Natural, flexible, and engaging game dynamics, enabled by the customizable agent design in AutoGen. Conversational Chess supports a range of game-play patterns, including AI-AI, AI-human, and human-human, with seamless switching between these modes during a single game. An illustrative example of these entertaining game dynamics can be found in Figure 15, Appendix D. (2) Grounding, which is a crucial aspect of maintaining game integrity. During gameplay, the board agent checks each proposed move for legality; if a move is invalid, the agent responds with an error, prompting the player agent to re-propose a legal move before continuing. This process ensures that only valid moves are played and helps maintain a consistent gaming experience. As an ablation study, we removed the board agent and instead relied only on a prompt, "you should make sure both you and the opponent are making legal moves", to ground the moves. The results highlighted that without the board agent, illegitimate moves caused game disruptions. The modular design offered flexibility, allowing swift adjustments to the board agent in response to evolving game rules or varying chess rule variants. A comprehensive demonstration of this ablation study is in Appendix D.
| 2308.08155#38 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
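The board agent's grounding step boils down to a legality check on every proposed move. A sketch with the python-chess package is below; the reply wording is invented, and this is a plausible building block rather than the demo's exact code.

```python
# Sketch of the board agent's grounding check using python-chess.
import chess

board = chess.Board()

def validate_move(uci_move: str) -> str:
    """Apply the move if legal; otherwise return an error so the player
    agent re-proposes a legal move before the game continues."""
    try:
        move = chess.Move.from_uci(uci_move)
    except ValueError:
        return f"Error: '{uci_move}' is not a well-formed move."
    if move not in board.legal_moves:
        return f"Error: {uci_move} is illegal in the current position."
    board.push(move)
    return f"Move {uci_move} accepted. FEN: {board.fen()}"

print(validate_move("e2e4"))   # legal opening move
print(validate_move("e7e5"))   # black replies
print(validate_move("a1a8"))   # rejected: the rook cannot jump over pieces
```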
2308.08285 | 38 | Effects of Scaling up LLMs. We use three LLMs with parameter sizes ranging from 3b to 13b, prompting them for document expansion and integrating the generated queries into pre-training. As shown in Table 1, scaling up the LLMs yields better retrieval performance in zero-shot contrastive pre-training.
1 Alpaca 13b is chosen because of its better results in zero-shot retrieval and on-par performance in fine-tuned retrieval.
[Figure 3 plot residue: curves for "Bottleneck (MARCO)" and "Bottleneck (DL20)" vs. the amount of training corpus for fine-grained pre-training, from 50k to 8.8M.]
Figure 3: Effects of curriculum learning for fine-tuned bottlenecked pre-training with expanded queries generated by Alpaca 13b. The dashed lines are the corresponding baselines from Table 2.
However, this observation does not hold after fine-tuning (Table 2). We hypothesize that, when fine-tuning with human labels, all of these LLMs are capable enough to give a good initialization for retrieval.
# Effects of Curriculum Learning | 2308.08285#38 | Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval |
2308.08493 | 38 | 7 The two annotators had almost perfect inter-rater agreement across all settings, because a small subset of instances was used for contamination detection and contamination is evident when it occurs.
Table 3: Overall accuracy at detecting contamination across 14 partitions for GPT-4 and GPT-3.5. The two LLMs are evaluated against human annotators. The "Success Rate" shows how often each method matches human judgment, while the "Accuracy" gives the corresponding percentages.
Method                          GPT-4 Success Rate   GPT-4 Accuracy   GPT-3.5 Success Rate   GPT-3.5 Accuracy
Strict Eval.: ChatGPT-Cheat?    0/14                 0.00%            11/14                  78.57%
Lenient Eval.: ChatGPT-Cheat?   9/14                 64.29%           13/14                  92.86%
Algorithm 1: BLEURT             11/14                78.57%           9/14                   64.29%
Algorithm 1: ROUGE-L            13/14                92.86%           7/14                   50.00%
Algorithm 2: GPT-4 ICL          14/14                100.00%          13/14                  92.86%
| 2308.08493#38 | Time Travel in LLMs: Tracing Data Contamination in Large Language Models |
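The second detection idea frames the exact/near-exact judgment as few-shot classification with GPT-4. A sketch of the in-context-learning judge is below; the demonstrations are abbreviated placeholders, the prompt wording is paraphrased rather than the paper's exact template, and the pre-1.0 openai package is assumed.

```python
# Sketch of the GPT-4 few-shot in-context-learning judge for
# exact / near-exact / inexact matches (pre-1.0 openai package assumed).
import openai

FEW_SHOT = """Instruction: Decide whether the candidate is an exact match,
a near-exact match, or an inexact match of the reference.

Reference: <reference 1> Candidate: <candidate 1> Label: exact match
Reference: <reference 2> Candidate: <candidate 2> Label: near-exact match
Reference: <reference 3> Candidate: <candidate 3> Label: inexact match
"""  # demonstrations abbreviated; the paper's prompt differs in wording

def icl_label(reference: str, candidate: str) -> str:
    prompt = f"{FEW_SHOT}\nReference: {reference} Candidate: {candidate} Label:"
    response = openai.ChatCompletion.create(
        model="gpt-4-0613",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"].strip()

# A partition is then flagged with the same rule as the human evaluation:
# >= 1 exact match or >= 2 near-exact matches among the labeled completions.
```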
2308.08155 | 39 | # 4 Discussion
We introduced an open-source library, AutoGen, that incorporates the paradigms of conversable agents and conversation programming. The library utilizes capable agents that are well-suited for multi-agent cooperation. It features a unified conversation interface among the agents, along with an auto-reply mechanism, which together establish an agent-interaction interface that capitalizes on the strengths of chat-optimized LLMs with broad capabilities while accommodating a wide range of applications. AutoGen serves as a general framework for creating and experimenting with multi-agent systems that can easily fulfill various practical requirements, such as reusing, customizing, and extending existing agents, as well as programming conversations between them. | 2308.08155#39 | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation |
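Because of the unified conversation interface and the auto-reply mechanism, a working two-agent application needs only a few lines; a representative minimal sketch (config handling and the task message are illustrative):

```python
# Minimal two-agent AutoGen application: the user proxy auto-replies by
# executing any code the assistant sends back, until the task terminates.
import autogen

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",   # fully automated; use "ALWAYS" for a human in the loop
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

user_proxy.initiate_chat(assistant, message="What date is today? Write Python to find out.")
```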