Dataset schema (column: type, observed value-length range):
doi: string (10–10)
chunk-id: int64 (0–936)
chunk: string (401–2.02k)
id: string (12–14)
title: string (8–162)
summary: string (228–1.92k)
source: string (31–31)
authors: string (7–6.97k)
categories: string (5–107)
comment: string (4–398)
journal_ref: string (8–194)
primary_category: string (5–17)
published: string (8–8)
updated: string (8–8)
references: list
2308.08285
39
# Effects of Curriculum Learning To further reduce the need for LLM-expanded queries in pre-training, we attempt to use a curriculum learning strategy as detailed before. We use randomly sampled spans as the coarse-grained context in the first stage of curriculum pre-training for 75% of the total training steps. Then we use a small amount of LLM-expanded queries as the fine-grained context for the remaining pre-training steps. Figures 3 and 4 show that both pre-training schemas benefit from curriculum learning. Bottleneck query generation outperforms its baseline with just 0.4 million LLM-expanded queries after fine-tuning. Zero-shot contrastive pre-training surpasses the baselines and continues to demonstrate sustainable improvements as the number of fine-grained queries increases. # Related Works # Pre-training for Dense Retrieval
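A minimal sketch of the two-stage schedule described in this chunk, assuming a simple step-indexed data selector; the 75% split point and the coarse/fine context types come from the text, while the corpus objects and sampling helpers are hypothetical:

```python
import random

# Hypothetical corpora: `documents` is the raw passage collection;
# `llm_expanded` maps a passage to its LLM-generated queries.
def sample_coarse(documents):
    # Stage 1: a randomly cropped span of the passage serves as the
    # coarse-grained context (no LLM inference needed).
    doc = random.choice(documents)
    words = doc.split()
    start = random.randrange(max(1, len(words) - 64))
    return " ".join(words[start:start + 64]), doc

def sample_fine(documents, llm_expanded):
    # Stage 2: an LLM-expanded query is the fine-grained context.
    doc = random.choice(documents)
    return random.choice(llm_expanded[doc]), doc

def curriculum_example(step, total_steps, documents, llm_expanded):
    # The first 75% of steps use coarse contexts; the remaining steps
    # use fine-grained LLM-expanded queries, per the two-stage strategy.
    if step < 0.75 * total_steps:
        return sample_coarse(documents)
    return sample_fine(documents, llm_expanded)
```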
2308.08285#39
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
39
when using GPT-3.5. We consider these results exciting given the algorithm’s simplicity. However, Table 3 shows that: (a) its performance is not universally good—it performs at chance level when using ROUGE-L on GPT-3.5 outputs (7/14), and (b) its success rate varies depending on the metric in use (i.e., BLEURT or ROUGE-L). (2) In contrast, Algorithm 2, which relies on GPT-4 evaluation using the few-shot ICL prompt, aligns closely with human evaluations. Specifically, in experiments run on GPT-4 and GPT-3.5, its success rates are 14/14 and 13/14, respectively. These accuracies are higher than any produced by Algorithm 1 and maintain consistency across all the settings with the two LLMs.
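A hedged sketch of the partition-level decision behind Algorithm 1, as described in this paper's summary: compare overlap scores of completions from guided vs. general instructions and flag contamination when the guided scores are statistically significantly higher. The choice of ROUGE-L, the t-test, and the threshold are illustrative assumptions, not necessarily the paper's exact procedure:

```python
from rouge_score import rouge_scorer
from scipy import stats

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(references, completions):
    # F-measure of ROUGE-L between each completion and its reference segment.
    return [scorer.score(ref, out)["rougeL"].fmeasure
            for ref, out in zip(references, completions)]

def partition_contaminated(refs, guided_outs, general_outs, alpha=0.05):
    # Flag the partition when guided-instruction completions overlap the
    # reference segments significantly more than general-instruction ones.
    guided = rouge_l(refs, guided_outs)
    general = rouge_l(refs, general_outs)
    t, p = stats.ttest_ind(guided, general, alternative="greater")
    return p < alpha
```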
2308.08493#39
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
40
Our experiments, as detailed in Section 3, demonstrate that this approach offers numerous benefits. The adoption of AutoGen has resulted in improved performance (over state-of-the-art approaches), reduced development code, and decreased manual burden for existing applications. It offers flexibility to developers, as demonstrated in A1 (scenario 3), A5, and A6, where AutoGen enables multi-agent chats to follow a dynamic pattern rather than fixed back-and-forth interactions. It allows humans to engage in activities alongside multiple AI agents in a conversational manner. Despite the complexity of these applications (most involving more than two agents or dynamic multi-turn agent cooperation), the implementation based on AutoGen remains straightforward. Dividing tasks among separate agents promotes modularity. Furthermore, since each agent can be developed, tested, and maintained separately, this approach simplifies overall development and code management.
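As an illustration of the conversational programming model this chunk describes, here is a minimal two-agent setup in the style of AutoGen's documented Python API; the agent names, task message, and config values are placeholders, and this is a sketch rather than a verified snippet for any particular AutoGen version:

```python
import autogen

# Placeholder LLM configuration; substitute a real model and API key.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]}

# An LLM-backed assistant agent, plus a user proxy that can execute the
# code the assistant writes and reply with the results.
assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",                 # fully automated back-and-forth
    code_execution_config={"work_dir": "coding"},
)

# The two agents converse until the task is solved or a turn limit is hit.
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA stock price change YTD.")
```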
2308.08155#40
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
40
# Related Works # Pre-training for Dense Retrieval Dense passage retrieval has seen sustained improvements with the recent development of pre-training tasks. Some works focus on contrastive pre-training with constructed span relationships (Chang et al. 2020), randomly cropped spans (Gao and Callan 2022), or multiple-granularity alignments (Ma et al. 2022). Meanwhile, other works focus on pre-training with auxiliary bottlenecked decoders, such as pre-training with a weak generative decoder (Lu et al. 2021), an extreme masking ratio (Liu and Shao 2022), and contextual span sampling (Wu et al. 2023a). Our method is similar to Gao and Callan (2022) and Wu et al. (2023a), but our core contribution is the methodology of incorporating expanded queries generated by LLMs into such pre-training schemas, which brings better context alignment and stronger zero-shot and fine-tuned performance.
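For reference, contrastive pre-training for dense retrieval typically optimizes an InfoNCE loss over query-passage pairs with in-batch negatives. A minimal PyTorch sketch of that standard objective (not this paper's exact implementation):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q_emb: torch.Tensor, p_emb: torch.Tensor, tau: float = 0.05):
    """In-batch-negative contrastive loss for dense retrieval.

    q_emb, p_emb: [batch, dim] embeddings of paired queries and passages;
    row i of p_emb is the positive passage for row i of q_emb, and every
    other row in the batch serves as a negative.
    """
    q = F.normalize(q_emb, dim=-1)
    p = F.normalize(p_emb, dim=-1)
    logits = q @ p.T / tau                       # [batch, batch] similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)       # diagonal entries are positives
```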
2308.08285#40
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
40
(3) Upon assessing the results of the ChatGPT-Cheat? method, we discover that this method invariably labels partitions as suspicious—likely due to the precaution against generating copyrighted content that is activated by safety filters—for all scenarios involving GPT-4. Given this, we interpret the outcomes of this method through two lenses: strict and lenient evaluation. In the strict evaluation, we do not interpret the suspicious label as contaminated or uncontaminated. Under this assessment, no partition is correctly classified according to human evaluation (0/14) in settings with GPT-4, and 11/14 in settings with GPT-3.5. In the lenient evaluation, we convert the suspicious label to either contaminated or uncontaminated in a way that maximizes the performance of this method. In this setting, the ChatGPT-Cheat? method correctly identifies 9/14 and 13/14 in settings with GPT-4 and GPT-3.5, respectively. However, this lenient evaluation is unrealistic due to the overfitting in interpreting the suspicious label. These findings support our observation that identifying contamination at the instance level, before extrapolating to the partition level, is a more resilient strategy.
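A small sketch of the strict vs. lenient scoring described above; the label strings and data layout are illustrative assumptions:

```python
def accuracy(preds, gold):
    return sum(p == g for p, g in zip(preds, gold)) / len(gold)

def strict_and_lenient(preds, gold):
    # Strict: a "suspicious" prediction can never count as correct.
    strict = accuracy(
        [p if p != "suspicious" else "abstain" for p in preds], gold)
    # Lenient: resolve each "suspicious" to whatever the gold label is,
    # which maximizes the method's measured performance (and overfits).
    lenient = accuracy(
        [g if p == "suspicious" else p for p, g in zip(preds, gold)], gold)
    return strict, lenient
```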
2308.08493#40
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
41
Although this work is still in its early experimental stages, it paves the way for numerous future directions and research opportunities. For instance, we can explore effective integration of existing agent implementations into our multi-agent framework and investigate the optimal balance between automation and human control in multi-agent workflows. As we further develop and refine AutoGen, we aim to investigate which strategies, such as agent topology and conversation patterns, lead to the most effective multi-agent conversations while optimizing the overall efficiency, among other factors. While increasing the number of agents and other degrees of freedom presents opportunities for tackling more complex problems, it may also introduce new safety challenges that require additional studies and careful consideration. We provide more discussion in Appendix B, including guidelines for using AutoGen and directions for future work. We hope AutoGen will help improve many LLM applications in terms of speed of development, ease of experimentation, and overall effectiveness and safety. We actively welcome contributions from the broader community. # Ethics statement There are several potential ethical considerations that could arise from the development and use of the AutoGen framework. • Privacy and Data Protection: The framework allows for human participation in conversations between agents. It is important to ensure that user data and conversations are protected, and that developers use appropriate measures to safeguard privacy.
2308.08155#41
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
41
[Figure 4: Effects of curriculum learning for zero-shot contrastive pre-training with LLM-expanded queries. Two curves, Contrast (MARCO) and Contrast (DL20), are plotted against the amount of training corpus for fine-grained pre-training, from 50k to 8.8M.] # LLM-based Query and Document Expansion Traditional query or document expansions generate additional context via query rewriting (Lavrenko and Croft 2017), or with specially fine-tuned T5 (Nogueira et al. 2019) or BART models (Cho et al. 2022). With the bloom of LLMs (Ouyang et al. 2022; Touvron et al. 2023; Wang et al. 2022b), a growing body of research focuses on using LLMs as query expansion models (Gao et al. 2023; Wang, Yang, and Wei 2023; Jagerman et al. 2023; Yu et al. 2023), which enhance the lexical match of query-passage pairs.
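A hedged sketch of LLM-based document expansion as query generation, in the spirit described above; the prompt wording and the use of the OpenAI chat API are illustrative assumptions, not this paper's setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def expand_document(passage: str, n_queries: int = 3) -> list[str]:
    # Ask an LLM to write search queries the passage would answer; these
    # expanded queries stand in for human-annotated ones during pre-training.
    prompt = (
        f"Write {n_queries} distinct search queries that the following "
        f"passage answers, one per line.\n\nPassage: {passage}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content
    return [q.strip() for q in text.splitlines() if q.strip()]
```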
2308.08285#41
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
41
(4) Last but not least, the human evaluation reveals that the train and test/validation splits of both the AG News and WNLI datasets were included in GPT-4’s pre-training data. However, for IMDB and RTE, only the training partitions were incorporated, while for XSum, only the test split was leaked. For GPT-3.5, the only data exposure was the test partition of the XSum dataset. These findings confirm that, despite their creators’ efforts, today’s LLMs have ingested NLP datasets. We hope that this observation informs the design of better scientific experiments with LLMs in the NLP space in the future. # 6 CONCLUSION
2308.08493#41
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
42
• Privacy and Data Protection: The framework allows for human participation in conversations between agents. It is important to ensure that user data and conversations are protected, and that developers use appropriate measures to safeguard privacy. • Bias and Fairness: LLMs have been shown to exhibit biases present in their training data (Navigli et al., 2023). When using LLMs in the AutoGen framework, it is crucial to address and mitigate any biases that may arise in the conversations between agents. Developers should be aware of potential biases and take steps to ensure fairness and inclusivity. • Accountability and Transparency: As discussed in the future work section, as the framework involves multiple agents conversing and cooperating, it is important to establish clear accountability and transparency mechanisms. Users should be able to understand and trace the decision-making process of the agents involved in order to ensure accountability and address any potential issues or biases.
2308.08155#42
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
42
However, as discussed before, LLM-based document expansion remains under-explored due to the expensive inference costs brought by the huge number of documents and the online inference issue. We propose to tackle those issues with pre-training techniques and curriculum learning strategies tailored for dense retrieval. Our method is also orthogonal to traditional query and document expansion and can incorporate them into the retrieval stage. # Conclusion This paper systematically studies the potential of pre-training with Large Language Model-based document expansion for dense passage retrieval. Strong improvements in zero-shot and out-of-domain performance are observed in contrastive pre-training with LLM-based document expansion. Moreover, both contrastive pre-training and bottlenecked query generation pre-training achieve good retrieval abilities after fine-tuning. We further propose a two-stage curriculum learning strategy that greatly reduces the need for LLM-expanded queries in pre-training while incurring only minor performance degradation. LLMs excel at expanding high-quality queries with enriched context information, which is suitable for scenarios lacking human annotations. Researchers can thus quickly initialize an unsupervised dense retrieval system by pre-training with LLM-based document expansion, even with no human labels provided.
2308.08285#42
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
42
# 6 CONCLUSION We proposed a novel method to detect data contamination in LLMs, assuming no access to their pre-training data. Our approach begins by pinpointing data contamination at the instance level. This was achieved by prompting the LLM to reproduce the latter segment of a dataset instance given its random-length initial segment, dataset name, and partition type, a process we called “guided instruction.” From here, we adopted a set of rules to generalize from instance-level to broader partition-level contamination. This involved leveraging statistically significant differences in BLEURT and ROUGE-L scores between completions generated by guided and general instructions, as well as evaluations from GPT-4 with few-shot in-context learning prompting. Our evaluation spanned 28 different settings, including seven datasets along with their respective train and test/validation partitions and two LLMs: GPT-4 and GPT-3.5. Our findings indicated that while the replication technique via guided instruction is notably effective, the most accurate evaluation approach that was closely aligned with human judgments for detecting data contamination
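A minimal sketch of constructing the guided instruction described above; the exact prompt template is an assumption, but the ingredients (dataset name, partition type, random-length initial segment) follow the text:

```python
import random

def guided_instruction(dataset: str, partition: str, instance: str):
    # Split the instance at a random point; the model must complete the
    # latter segment given the initial one plus the dataset metadata.
    words = instance.split()
    cut = random.randint(1, len(words) - 1)
    first = " ".join(words[:cut])
    latter = " ".join(words[cut:])
    prompt = (
        f"You are provided with the first piece of an instance from the "
        f"{partition} split of the {dataset} dataset. Finish the instance "
        f"as it exactly appeared in the dataset:\n\n{first}"
    )
    return prompt, latter  # compare the LLM's completion against `latter`
```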
2308.08493#42
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
43
• Trust and Reliance: AutoGen leverages human understanding and intelligence while providing automation through conversations between agents. It is important to consider the impact of this interaction on user experience, trust, and reliance on AI systems. Clear communication and user education about the capabilities and limitations of the system will be essential (Cai et al., 2019). • Unintended Consequences: As discussed before, the use of multi-agent conversations and automation in complex tasks may have unintended consequences. In particular, allowing LLM agents to make changes in external environments through code execution or function calls, such as installing packages, could be risky. Developers should carefully consider the potential risks and ensure that appropriate safeguards are in place to prevent harm or negative outcomes. # Acknowledgements
2308.08155#43
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
43
Table 4: An assessment of our proposed methods in contrast to the ChatGPT-Cheat? method. We evaluate Algorithm 1 using BLEURT and ROUGE-L, as well as Algorithm 2, which relies on GPT-4 decisions via few-shot ICL prompting. The evaluations are performed on 10 instances randomly drawn from each split of a particular dataset, with GPT-4 and GPT-3.5 serving as the LLMs that are investigated. Partition-level contamination is represented in the following ways: (1) While asterisks (*) indicate statistically significant differences between the completions produced by guided and general instructions (as measured by BLEURT and ROUGE-L), underlined numbers indicate settings that align with human evaluations (Algorithm 1). (2) A single tick (✓) points to the presence of at least one exact match, while a double tick (✓✓) signals the identification of two or more near-exact matches (Algorithm 2). A cross sign (×) denotes that neither of the aforementioned conditions were met. For the ChatGPT-Cheat? method, this cross sign indicates that the model’s output does not
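The tick convention above encodes a simple decision rule for Algorithm 2, which might be sketched as follows; the label names are assumptions:

```python
def partition_verdict(instance_labels: list[str]) -> str:
    # instance_labels: GPT-4 ICL judgments per generated completion,
    # e.g. "exact", "near-exact", or "no-match".
    exact = instance_labels.count("exact")
    near = instance_labels.count("near-exact")
    if near >= 2:
        return "contaminated (✓✓)"   # two or more near-exact matches
    if exact >= 1:
        return "contaminated (✓)"    # at least one exact match
    return "uncontaminated (×)"
```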
2308.08493#43
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
44
The work presented in this report was made possible through discussions and feedback from Peter Lee, Johannes Gehrke, Eric Horvitz, Steven Lucco, Umesh Madan, Robin Moeur, Piali Choudhury, Saleema Amershi, Adam Fourney, Victor Dibia, Guoqing Zheng, Corby Rosset, Ricky Loynd, Ece Kamar, Rafah Hosn, John Langford, Ida Momennejad, Brian Krabach, Taylor Webb, Shanka Subhra Mondal, Wei-ge Chen, Robert Gruen, Yinan Li, Yue Wang, Suman Nath, Tanakorn Leesatapornwongsa, Xin Wang, Shishir Patil, Tianjun Zhang, Saehan Jo, Ishai Menache, Konstantina Mellou, Runlong Zhou, Feiran Jia, Hamed Khanpour, Hamid Palangi, Srinagesh Sharma, Julio Albinati Cortez, Amin Saied, Yuzhe Ma, Dujian Ding, Linyong Nan, Prateek Yadav, Shannon Shen, Ankur Mallick, Mark Encarnación, Lars
2308.08155#44
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
44
# References Bengio, Y.; Louradour, J.; Collobert, R.; and Weston, J. 2009. Curriculum learning. In Danyluk, A. P.; Bottou, L.; and Littman, M. L., eds., Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, volume 382 of ACM International Conference Proceeding Series, 41–48. ACM. Cai, D.; Wang, Y.; Liu, L.; and Shi, S. 2022. Recent advances in retrieval-augmented text generation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 3417–3419. Chang, W.; Yu, F. X.; Chang, Y.; Yang, Y.; and Kumar, S. 2020. Pre-training Tasks for Embedding-based Large-scale Retrieval. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Cho, S.; Jeong, S.; Yang, W.; and Park, J. C. 2022. Query
2308.08285#44
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08285
45
Ethiopia, April 26-30, 2020. OpenReview.net. Cho, S.; Jeong, S.; Yang, W.; and Park, J. C. 2022. Query Generation with External Knowledge for Dense Retrieval. In Agirre, E.; Apidianaki, M.; and Vulic, I., eds., Proceedings of Deep Learning Inside Out: The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO@ACL 2022, Dublin, Ireland and Online, May 27, 2022, 22–32. Association for Computational Linguistics. Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; Schuh, P.; Shi, K.; Tsvyashchenko, S.; Maynez, J.; Rao, A.; Barnes, P.; Tay, Y.; Shazeer, N.; Prabhakaran, V.; Reif, E.; Du, N.; Hutchinson, B.; Pope, R.; Bradbury, J.; Austin, J.; Isard, M.; Gur-Ari, G.; Yin, P.;
2308.08285#45
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08155
46
# References Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, and Siva Reddy. Evaluating correctness and faithfulness of instruction-following models for question answering. arXiv preprint arXiv:2307.16877, 2023. Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen, et al. Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019. Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety, 2016. AutoGPT. Documentation — auto-gpt. https://docs.agpt.co/, 2023. BabyAGI. Github — babyagi. https://github.com/yoheinakajima/babyagi, 2023.
2308.08155#46
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
46
Du, N.; Hutchinson, B.; Pope, R.; Bradbury, J.; Austin, J.; Isard, M.; Gur-Ari, G.; Yin, P.; Duke, T.; Levskaya, A.; Ghemawat, S.; Dev, S.; Michalewski, H.; Garcia, X.; Misra, V.; Robinson, K.; Fedus, L.; Zhou, D.; Ippolito, D.; Luan, D.; Lim, H.; Zoph, B.; Spiridonov, A.; Sepassi, R.; Dohan, D.; Agrawal, S.; Omernick, M.; Dai, A. M.; Pillai, T. S.; Pellat, M.; Lewkowycz, A.; Moreira, E.; Child, R.; Polozov, O.; Lee, K.; Zhou, Z.; Wang, X.; Saeta, B.; Diaz, M.; Firat, O.; Catasta, M.; Wei, J.; Meier-Hellstern, K.; Eck, D.; Dean, J.; Petrov, S.; and Fiedel, N. 2022. PaLM: Scaling Language Modeling with Pathways. CoRR, abs/2204.02311. Craswell, N.; Mitra, B.;
2308.08285#46
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
46
[Table 4 body, flattened by extraction: for each dataset's train and test/validation splits, Algorithm 1 reports BLEURT and ROUGE-L scores under general vs. guided instruction, with * marking statistically significant differences and underlining marking agreement with human evaluation; Algorithm 2 (GPT-4 ICL) reports ✓/✓✓/× verdicts per split; ChatGPT-Cheat? returns "?" (suspicious) for every split; the final rows give the human-evaluation verdicts.]
2308.08493#46
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
47
BabyAGI. Github — babyagi. https://github.com/yoheinakajima/babyagi, 2023. Carrie J. Cai, Samantha Winter, David F. Steiner, Lauren Wilcox, and Michael Terry. “Hello AI”: Uncovering the onboarding needs of medical practitioners for human-AI collaborative decision-making. Proceedings of the ACM on Human-Computer Interaction, 2019. Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint arXiv:2305.17126, 2023. Chroma. Chromadb. https://github.com/chroma-core/chroma, 2023. Victor Dibia. LIDA: A tool for automatic generation of grammar-agnostic visualizations and infographics using large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Toronto, Canada, July 2023. Association for Computational Linguistics. Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. Self-collaboration code generation via chatgpt. arXiv preprint arXiv:2304.07590, 2023.
2308.08155#47
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
47
N. 2022. PaLM: Scaling Language Modeling with Pathways. CoRR, abs/2204.02311. Craswell, N.; Mitra, B.; Yilmaz, E.; and Campos, D. 2021. Overview of the TREC 2020 deep learning track. arXiv:2102.07662. Craswell, N.; Mitra, B.; Yilmaz, E.; Campos, D.; and Voorhees, E. M. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association
2308.08285#47
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
47
[Table 4 body, continued (flattened by extraction): repeated ChatGPT-Cheat? ("?") and human-evaluation verdict rows, followed by Algorithm 1 BLEURT and ROUGE-L scores under general vs. guided instruction for the second model setting, and Algorithm 2 (GPT-4 ICL) verdicts, all × in this block.]
2308.08493#47
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
48
Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023. Atty Eleti, Jeff Harris, and Logan Kilpatrick. Function calling and other api updates. https://openai.com/blog/function-calling-and-other-api-updates, 2023. Guidance. Guidance. https://github.com/guidance-ai/guidance, 2023. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. Metagpt: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023. Eric Horvitz. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, 1999.
2308.08155#48
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
48
for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–4186. Minneapolis, Minnesota: Association for Computational Linguistics. Gao, L.; and Callan, J. 2021. Condenser: a Pre-training Architecture for Dense Retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 981–993. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics. Gao, L.; and Callan, J. 2022. Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2843–2853. Dublin, Ireland: Association for Computational Linguistics. Gao, L.; Ma, X.; Lin, J.; and Callan, J. 2022. Tevatron: An efficient and flexible toolkit for dense retrieval. arXiv preprint arXiv:2203.05765. Gao, L.; Ma, X.; Lin, J.; and Callan, J.
2308.08285#48
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08155
49
Eric Horvitz. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, 1999. HuggingFace. Transformers agent. https://huggingface.co/docs/transformers/transformers_agents, 2023. Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 2019. LangChain. Introduction — langchain. https://python.langchain.com/en/latest/index.html, 2023. Mike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra. Deal or no deal? end-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125, 2017.
2308.08155#49
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
49
dense retrieval. arXiv preprint arXiv:2203.05765. Gao, L.; Ma, X.; Lin, J.; and Callan, J. 2023. Precise Zero-Shot Dense Retrieval without Relevance Labels. In Rogers, A.; Boyd-Graber, J. L.; and Okazaki, N., eds., Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, 1762–1777. Association for Computational Linguistics. Gao, T.; Yao, X.; and Chen, D. 2021. SimCSE: Simple Contrastive Learning of Sentence Embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 6894–6910. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics. Izacard, G.; Caron, M.; Hosseini, L.; Riedel, S.; Bojanowski, P.; Joulin, A.; and Grave, E. 2021. Towards Unsupervised Dense Information Retrieval with Contrastive
2308.08285#49
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
49
was the few-shot in-context learning prompt with GPT-4, which integrates a few example instances from human assessments in the input prompt. This method yielded a success rate in pinpointing data contamination across 14/14 scenarios for GPT-4 and 13/14 for GPT-3.5. # REFERENCES Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity, 2023. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The fifth pascal recognizing textual entailment challenge. TAC, 7:8, 2009.
2308.08493#49
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
50
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 2020. Beibin Li, Konstantina Mellou, Bo Zhang, Jeevan Pathuri, and Ishai Menache. Large language models for supply chain optimization. arXiv preprint arXiv:2307.03875, 2023a. Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society, 2023b. Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language models through multi-agent debate, 2023.
2308.08155#50
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
50
Bojanowski, P.; Joulin, A.; and Grave, E. 2021. Towards Unsupervised Dense Information Retrieval with Contrastive Learning. CoRR, abs/2112.09118. Jagerman, R.; Zhuang, H.; Qin, Z.; Wang, X.; and Bendersky, M. 2023. Query Expansion by Prompting Large Language Models. CoRR, abs/2305.03653. Karpukhin, V.; Oguz, B.; Min, S.; Lewis, P.; Wu, L.; Edunov, S.; Chen, D.; and Yih, W.-t. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 6769–6781. Online: Association for Computational Linguistics. Khattab, O.; and Zaharia, M. 2020. ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT. In Huang, J. X.; Chang, Y.; Cheng, X.; Kamps, J.; Murdock, V.; Wen,
2308.08285#50
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
50
Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023. Sebastian Bordt and Ulrike von Luxburg. Chatgpt participates in a computer science exam. ArXiv, abs/2303.09461, 2023.
2308.08493#50
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
51
Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. arXiv preprint arXiv:1802.08802, 2018. Jerry Liu. LlamaIndex, November 2022. URL https://github.com/jerryjliu/llama_index. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. Roberto Navigli, Simone Conia, and Björn Ross. Biases in large language models: Origins, inventory and discussion. ACM Journal of Data and Information Quality, 2023. OpenAI. ChatGPT plugins. https://openai.com/blog/chatgpt-plugins, 2023. Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023.
2308.08155#51
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
51
Interaction over BERT. In Huang, J. X.; Chang, Y.; Cheng, X.; Kamps, J.; Murdock, V.; Wen, J.; and Liu, Y., eds., Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, 39–48. ACM. Lavrenko, V.; and Croft, W. B. 2017. Relevance-Based Language Models. SIGIR Forum, 51(2): 260–267. Lewis, P. S. H.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin, V.; Goyal, N.; Küttler, H.; Lewis, M.; Yih, W.; Rocktäschel, T.; Riedel, S.; and Kiela, D. 2020. Retrieval-Augmented
2308.08285#51
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
51
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
2308.08493#51
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
52
Md Rizwan Parvez, Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. Retrieval augmented code generation and summarization. arXiv preprint arXiv:2108.11601, 2021. Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023. Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11 2019. URL https://arxiv.org/abs/1908.10084. Semantic-Kernel. Semantic kernel. https://github.com/microsoft/semantic-kernel, 2023.
2308.08155#52
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
52
Generation for Knowledge-Intensive NLP Tasks. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Liu, Y.; Lu, W.; Cheng, S.; Shi, D.; Wang, S.; Cheng, Z.; and Yin, D. 2021. Pre-trained Language Model for Web-scale Retrieval in Baidu Search. In Zhu, F.; Ooi, B. C.; and Miao, C., eds., KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021, 3365–3375. ACM. Liu, Z.; and Shao, Y. 2022. RetroMAE: Pre-training Retrieval-oriented Transformers via Masked Auto-Encoder. arXiv preprint arXiv:2205.12035. Liu, Z.; Xiao, S.; Shao, Y.; and Cao, Z. 2023. RetroMAE-2:
2308.08285#52
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
52
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4, 2023. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models, 2021. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models, 2023. Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie. A survey on evaluation of large language models, 2023.
2308.08493#52
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
53
Semantic-Kernel. Semantic kernel. https://github.com/microsoft/semantic-kernel, 2023. Bokui Shen, Fei Xia, Chengshu Li, Roberto Martín-Martín, Linxi Fan, Guanzhi Wang, Claudia Pérez-D'Arpino, Shyamal Buch, Sanjana Srivastava, Lyne Tchapmi, et al. igibson 1.0: A simulation environment for interactive tasks in large realistic scenes. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021. Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In International Conference on Machine Learning. PMLR, 2017. Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. ALFWorld: Aligning Text and Embodied Environments for Interactive Learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2021. URL https://arxiv.org/abs/2010.03768.
2308.08155#53
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
53
Liu, Z.; Xiao, S.; Shao, Y.; and Cao, Z. 2023. RetroMAE-2: Duplex Masked Auto-Encoder For Pre-Training Retrieval-Oriented Language Models. In Rogers, A.; Boyd-Graber, J. L.; and Okazaki, N., eds., Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, 2635–2648. Association for Computational Linguistics. Lu, S.; He, D.; Xiong, C.; Ke, G.; Malik, W.; Dou, Z.; Bennett, P.; Liu, T.-Y.; and Overwijk, A. 2021. Less is More: Pretrain a Strong Siamese Encoder for Dense Text Retrieval Using a Weak Decoder. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2780–2791. Lu, Y.; Liu, Y.; Liu, J.; Shi, Y.; Huang, Z.; Sun, S. F. Y.; Tian, H.; Wu, H.;
2308.08285#53
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
53
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models, 2022.
2308.08493#53
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
54
Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John Agapiou, Julian Schrittwieser, et al. Starcraft ii: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782, 2017. Chi Wang, Qingyun Wu, Markus Weimer, and Erkang Zhu. Flaml: A fast and lightweight automl library. Proceedings of Machine Learning and Systems, 2021. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023a. Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432, 2023b.
2308.08155#54
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
54
Y.; Liu, Y.; Liu, J.; Shi, Y.; Huang, Z.; Sun, S. F. Y.; Tian, H.; Wu, H.; Wang, S.; Yin, D.; et al. 2022. Ernie-search: Bridging cross-encoder with dual-encoder via self on-the-fly distillation for dense passage retrieval. arXiv preprint arXiv:2205.09153. Ma, X.; Guo, J.; Zhang, R.; Fan, Y.; and Cheng, X. 2022. Pre-train a Discriminative Text Encoder for Dense Retrieval via Contrastive Span Prediction. arXiv preprint arXiv:2204.10641. Nguyen, T.; Rosenberg, M.; Song, X.; Gao, J.; Tiwary, S.; Majumder, R.; and Deng, L. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. In Besold, T. R.; Bordes, A.; d'Avila Garcez, A. S.; and Wayne, G., eds., Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches
2308.08285#54
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
54
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine learning challenges workshop, pp. 177–190. Springer, 2005. Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. Glam: Efficient scaling of language models with mixture-of-experts. In International Conference on Machine Learning, pp. 5547–5569. PMLR, 2022. B. Efron. Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics, 7(1):1 – 26, 1979. doi: 10.1214/aos/1176344552. URL https://doi.org/10.1214/aos/1176344552.
2308.08493#54
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
55
Daniel S. Weld and Oren Etzioni. The first law of robotics (a call to arms). In AAAI Conference on Artificial Intelligence, 1994. Max Woolf. Langchain problem. https://minimaxir.com/2023/07/langchain-problem/, 2023. Yiran Wu, Feiran Jia, Shaokun Zhang, Qingyun Wu, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, and Chi Wang. An empirical study on challenging math problem solving with gpt-4. arXiv preprint arXiv:2306.01337, 2023. Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864, 2023. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022. # A Related Work
2308.08155#55
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
55
Garcez, A. S.; and Wayne, G., eds., Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016, volume 1773 of CEUR Workshop Proceedings. CEUR-WS.org. Nogueira, R. F.; Yang, W.; Lin, J.; and Cho, K. 2019. Document Expansion by Query Prediction. CoRR, abs/1904.08375. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C. L.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; Schulman, J.; Hilton, J.; Kelton, F.; Miller, L.; Simens, M.; Askell, A.; Welinder, P.; Christiano, P. F.; Leike, J.; and Lowe, R. 2022. Training language models to follow instructions with human feedback. In NeurIPS.
2308.08285#55
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
55
Bradley Efron. Second Thoughts on the Bootstrap. Statistical Science, 18(2):135 – 140, 2003. doi: 10.1214/ss/1063994968. URL https://doi.org/10.1214/ss/1063994968. Bradley Efron and Robert J. Tibshirani. An Introduction to the Bootstrap. Number 57 in Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, Boca Raton, Florida, USA, 1993. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 1–9, Prague, June 2007. Association for Computational Linguistics. URL https://aclanthology.org/W07-1401.
2308.08493#55
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
56
# A Related Work We examine existing LLM-based agent systems or frameworks that can be used to build LLM applications. We categorize the related work into single-agent and multi-agent systems and specifically provide a summary of differentiators comparing AutoGen with existing multi-agent systems in Table 1. Note that many of these systems are evolving open-source projects, so the remarks and statements about them may only be accurate as of the time of writing. We refer interested readers to detailed LLM-based agent surveys (Xi et al., 2023; Wang et al., 2023b). # Single-Agent Systems: • AutoGPT: AutoGPT is an open-source implementation of an AI agent that attempts to autonomously achieve a given goal (AutoGPT, 2023). It follows a single-agent paradigm in which it augments the AI model with many useful tools, and does not support multi-agent collaboration. • ChatGPT+ (with code interpreter or plugin): ChatGPT, a conversational AI service or agent, can now be used alongside a code interpreter or plugin (currently available only under the premium subscription plan ChatGPT Plus) (OpenAI, 2023). The code interpreter enables ChatGPT to execute code, while the plugin enhances ChatGPT with a wide range of curated tools.
2308.08155#56
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
56
Qu, Y.; Ding, Y.; Liu, J.; Liu, K.; Ren, R.; Zhao, W. X.; Dong, D.; Wu, H.; and Wang, H. 2021. RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 5835–5847. Online: Association for Computational Linguistics. Ren, R.; Qu, Y.; Liu, J.; Zhao, W. X.; She, Q.; Wu, H.; Wang, H.; and Wen, J.-R. 2021. RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2825–2835. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics. Robertson, S.; Zaragoza, H.; et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval,
2308.08285#56
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
56
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pp. 70–79, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5409. URL https://www.aclweb.org/anthology/D19-5409. R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The second pascal recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, volume 7, pp. 785–794, 2006. Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy risks in language models, 2022.
2308.08493#56
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
57
• LangChain Agents: LangChain is a general framework for developing LLM-based applications (LangChain, 2023). LangChain Agents is a subpackage for using an LLM to choose a sequence of actions. There are various types of agents in LangChain Agents, with the ReAct agent being a notable example that combines reasoning and acting when using LLMs (mainly designed for LLMs prior to ChatGPT) (Yao et al., 2022). All agents provided in LangChain Agents follow a single-agent paradigm and are not inherently designed for communicative and collaborative modes. A significant summary of its limitations can be found in (Woolf, 2023). Due to these limitations, even the multi-agent systems in LangChain (e.g., re-implementation of CAMEL) are not based on LangChain Agents but are implemented from scratch. Their connection to LangChain lies in the use of basic orchestration modules provided by LangChain, such as AI models wrapped by LangChain and the corresponding interface. • Transformers Agent: Transformers Agent (HuggingFace, 2023) is an experimental natural-language API built on the transformers repository. It includes a set of curated tools and an agent to interpret natural language and use these tools. Similar to AutoGPT, it follows a single-agent paradigm and does not support agent collaboration.
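The ReAct loop mentioned above is easy to summarize in code. The following is a minimal, schematic sketch of the pattern rather than LangChain's actual implementation; the llm and search functions are hypothetical placeholders, and the Thought/Action/Observation prompt format follows the convention of Yao et al. (2022).

```python
import re

def llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat/completion model call."""
    raise NotImplementedError

def search(query: str) -> str:
    """Hypothetical placeholder tool; a real agent would call a search API."""
    raise NotImplementedError

TOOLS = {"search": search}

def react_agent(question: str, max_steps: int = 5) -> str:
    # ReAct interleaves reasoning and acting: the model emits a "Thought",
    # optionally a tool "Action", and the tool result is appended as an
    # "Observation" that conditions the next reasoning step.
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")
        transcript += f"Thought:{step}\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        action = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)  # e.g. Action: search[...]
        if action:
            observation = TOOLS[action.group(1)](action.group(2))
            transcript += f"Observation: {observation}\n"
    return "No final answer within the step budget."
```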
2308.08155#57
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
57
Zaragoza, H.; et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4): 333–389. Sakata, W.; Shibata, T.; Tanaka, R.; and Kurohashi, S. 2019. FAQ Retrieval using Query-Question Similarity and BERT-Based Query-Answer Relevance. In Piwowarski, B.; Chevalier, M.; Gaussier, É.; Maarek, Y.; Nie, J.; and Scholer, F., eds., Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019, 1113–1116. ACM. Santhanam, K.; Khattab, O.; Saad-Falcon, J.; Potts, C.; and Zaharia, M. 2022. ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 3715–3734. Seattle, United States: Association for Computational Linguistics.
2308.08285#57
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
57
Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy risks in language models, 2022. Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. Openassistant conversations – democratizing large language model alignment, 2023. Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning, 2012. Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://aclanthology.org/W04-1013.
2308.08493#57
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
58
AutoGen differs from the single-agent systems above by supporting multi-agent LLM applications. # Multi-Agent Systems: • BabyAGI: BabyAGI (BabyAGI, 2023) is an example implementation of an AI-powered task management system in a Python script. In this implemented system, multiple LLM-based agents are used. For example, there is an agent for creating new tasks based on the objective and the result of the previous task, an agent for prioritizing the task list, and an agent for completing tasks/sub-tasks. As a multi-agent system, BabyAGI adopts a static agent conversation pattern, i.e., a predefined order of agent communication, while AutoGen supports both static and dynamic conversation patterns and additionally supports tool usage and human involvement. • CAMEL: CAMEL is a communicative agent framework. It demonstrates how role playing can be used to let chat agents communicate with each other for task completion. It also records agent conversations for behavior analysis and capability understanding. An Inception-prompting technique is used to achieve autonomous cooperation between agents. Unlike AutoGen, CAMEL does not natively support tool usage, such as code execution. Although it is proposed as an infrastructure for multi-agent conversation, it only supports static conversation patterns, while AutoGen additionally supports dynamic conversation patterns.
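The three-agent BabyAGI pipeline described above boils down to a short static loop. Below is a minimal sketch, assuming a single hypothetical llm helper; the prompts and the iteration cap are illustrative, not BabyAGI's actual script.

```python
from collections import deque

def llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call."""
    raise NotImplementedError

def babyagi_loop(objective: str, first_task: str, max_iterations: int = 10) -> str:
    tasks = deque([first_task])
    result = ""
    for _ in range(max_iterations):
        if not tasks:
            break
        task = tasks.popleft()
        # Execution agent: completes the current task/sub-task.
        result = llm(f"Objective: {objective}\nTask: {task}\nComplete the task.")
        # Task-creation agent: derives new tasks from the objective and last result.
        created = llm(f"Objective: {objective}\nLast result: {result}\n"
                      "List any new tasks, one per line.").splitlines()
        tasks.extend(t for t in created if t.strip())
        # Prioritization agent: reorders the remaining task list.
        ordered = llm(f"Objective: {objective}\nTasks:\n" + "\n".join(tasks) +
                      "\nReturn these tasks reordered by priority, one per line.").splitlines()
        tasks = deque(t for t in ordered if t.strip())
    return result
```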
2308.08155#58
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
58
of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 3715–3734. Seattle, United States: Association for Computational Linguistics. Thakur, N.; Reimers, N.; Rücklé, A.; Srivastava, A.; and Gurevych, I. 2021. BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models. CoRR, abs/2104.08663. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; Rodriguez, A.; Joulin, A.; Grave, E.; and Lample, G. 2023. LLaMA: Open and Efficient Foundation Language Models. CoRR, abs/2302.13971. Wang, L.; Yang, N.; Huang, X.; Jiao, B.; Yang, L.; Jiang, D.; Majumder, R.; and Wei, F. 2022a. SimLM: Pre-training with Representation Bottleneck for Dense Passage
2308.08285#58
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
58
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. ArXiv, abs/1808.08745, 2018. OpenAI. Gpt-4 technical report, 2023. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022.
2308.08493#58
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
59
• Multi-Agent Debate: Two recent works investigate and show that multi-agent debate is an effective way to encourage divergent thinking in LLMs (Liang et al., 2023) and to improve the factuality and reasoning of LLMs (Du et al., 2023). In both works, multiple LLM inference instances are constructed as multiple agents to solve problems with agent debate. Each agent is simply an LLM inference instance, while no tool or human is involved, and the inter-agent conversation needs to follow a pre-defined order. These works attempt to build LLM applications with multi-agent conversation, while AutoGen, designed as a generic infrastructure, can be used to facilitate this development and enable more applications with dynamic conversation patterns. • MetaGPT: MetaGPT (Hong et al., 2023) is a specialized LLM application based on a multi-agent conversation framework for automatic software development. They assign different roles to GPTs to collaboratively develop software. They differ from AutoGen by being specialized solutions to a certain scenario, while AutoGen is a generic infrastructure to facilitate building applications for various scenarios.
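The debate procedure in these works follows a fixed round structure, which the sketch below illustrates. It assumes a hypothetical llm call standing in for one LLM inference instance; the prompts and default round counts are illustrative, not taken from either paper.

```python
def llm(prompt: str) -> str:
    """Hypothetical placeholder for one LLM inference instance."""
    raise NotImplementedError

def multi_agent_debate(question: str, n_agents: int = 3, n_rounds: int = 2) -> list:
    # Round 0: each agent answers independently.
    answers = [llm(f"Question: {question}\nGive your answer and reasoning.")
               for _ in range(n_agents)]
    # Debate rounds: in a pre-defined order, each agent reads the others'
    # answers and may revise its own (a static conversation pattern).
    for _ in range(n_rounds):
        revised = []
        for i in range(n_agents):
            others = "\n---\n".join(a for j, a in enumerate(answers) if j != i)
            revised.append(llm(f"Question: {question}\n"
                               f"Other agents answered:\n{others}\n"
                               "Revise your answer if their reasoning is convincing."))
        answers = revised
    return answers
```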
2308.08155#59
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
59
L.; Jiang, D.; Majumder, R.; and Wei, F. 2022a. SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval. CoRR, abs/2207.02578. Wang, L.; Yang, N.; and Wei, F. 2023. Query2doc: Query Expansion with Large Language Models. CoRR, abs/2303.07678. Wang, Y.; Kordi, Y.; Mishra, S.; Liu, A.; Smith, N. A.; Khashabi, D.; and Hajishirzi, H. 2023. Self-Instruct: Aligning Language Models with Self-Generated Instructions. In Rogers, A.; Boyd-Graber, J. L.; and Okazaki, N., eds., Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, 13484–13508. Association for Computational Linguistics.
2308.08285#59
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
59
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. Amy Pu, Hyung Won Chung, Ankur P Parikh, Sebastian Gehrmann, and Thibault Sellam. Learning compact metrics for mt. In Proceedings of EMNLP, 2021. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. Partha Pratim Ray. Chatgpt: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 2023.
2308.08493#59
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
60
There are a few other specialized single-agent or multi-agent systems, such as Voyager (Wang et al., 2023a) and Generative Agents (Park et al., 2023), which we skip due to lower relevance. In Table 1, we summarize differences between AutoGen and the most relevant multi-agent systems. Table 1: Summary of differences between AutoGen and other related multi-agent systems. infrastructure: whether the system is designed as a generic infrastructure for building LLM applications. conversation pattern: the types of patterns supported by the implemented systems. Under the ‘static’ pattern, agent topology remains unchanged regardless of different inputs. AutoGen allows flexible conversation patterns, including both static and dynamic patterns that can be customized based on different application needs. execution-capable: whether the system can execute LLM-generated code; human involvement: whether (and how) the system allows human participation during the execution process of the system. AutoGen allows flexible human involvement in multi-agent conversation with the option for humans to skip providing inputs.

Aspect               | AutoGen   | Multi-agent Debate | CAMEL  | BabyAGI | MetaGPT
Infrastructure       | ✓         | ✗                  | ✓      | ✗       | ✗
Conversation pattern | flexible  | static             | static | static  | static
Execution-capable    | ✓         | ✗                  | ✗      | ✗       | ✓
Human involvement    | chat/skip | ✗                  | ✗      | ✗       | ✗

# B Expanded Discussion
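The "flexible" conversation pattern credited to AutoGen in Table 1 can be made concrete with its group-chat setup, in which an LLM-backed manager picks the next speaker each turn, so the agent topology is not fixed in advance. Below is a minimal sketch based on the pyautogen API; the model name, agent roles, and task message are assumptions.

```python
import autogen

# Assumed configuration; the API key is expected to come from the environment
# or an OAI_CONFIG_LIST file in practice.
llm_config = {"config_list": [{"model": "gpt-4"}]}

engineer = autogen.AssistantAgent(name="engineer", llm_config=llm_config)
critic = autogen.AssistantAgent(name="critic", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",  # human is consulted only at termination
    code_execution_config={"work_dir": "run"},
)

# The manager chooses the next speaker at every turn, so the conversation
# topology adapts to the dialogue -- a dynamic conversation pattern.
groupchat = autogen.GroupChat(agents=[user_proxy, engineer, critic],
                              messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
user_proxy.initiate_chat(manager, message="Build and review a small CLI tool.")
```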
2308.08155#60
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
60
Wang, Y.; Mishra, S.; Alipoormolabashi, P.; Kordi, Y.; Mirzaei, A.; Naik, A.; Ashok, A.; Dhanasekaran, A. S.; Arunkumar, A.; Stap, D.; Pathak, E.; Karamanolakis, G.; Lai, H. G.; Purohit, I.; Mondal, I.; Anderson, J.; Kuznia, K.; Doshi, K.; Pal, K. K.; Patel, M.; Moradshahi, M.; Parmar, M.; Purohit, M.; Varshney, N.; Kaza, P. R.; Verma, P.; Puri, R. S.; Karia, R.; Doshi, S.; Sampat, S. K.; Mishra, S.; A, S. R.; Patro, S.; Dixit, T.; and Shen, X. 2022b. Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks. In Goldberg, Y.; Kozareva, Z.; and Zhang, Y., eds., Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United
2308.08285#60
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08155
61
# B Expanded Discussion The applications in Section 3 show how AutoGen not only enables new applications but also helps renovate existing ones. For example, in A1 (scenario 3), A5, and A6, AutoGen enabled the creation of multi-agent conversations that follow a dynamic pattern instead of a fixed back-and-forth. And in both A5 and A6, humans can participate in the activities together with multiple other AI agents in a conversational manner. Similarly, A1-A4 show how popular applications can be renovated quickly with AutoGen. Despite the complexity of these applications (most of them involve more than two agents or dynamic multi-turn agent cooperation), our AutoGen-based implementation remains simple, demonstrating promising opportunities to build creative applications and a large space for innovation. In reflecting on why these benefits can be achieved in these applications with AutoGen, we believe there are a few reasons: • Ease of use: The built-in agents can be used out-of-the-box, delivering strong performance even without any customization. (A1, A3) • Modularity: The division of tasks into separate agents promotes modularity in the system. Each agent can be developed, tested, and maintained independently, simplifying the overall development process and facilitating code management. (A3, A4, A5, and A6)
2308.08155#61
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
61
Y., eds., Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, 5085–5109. Association for Computational Linguistics. Wenzek, G.; Lachaux, M.; Conneau, A.; Chaudhary, V.; Guzmán, F.; Joulin, A.; and Grave, E. 2020. CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data. In Calzolari, N.; Béchet, F.; Blache, P.; Choukri, K.; Cieri, C.; Declerck, T.; Goggi, S.; Isahara, H.; Maegaard, B.; Mariani, J.; Mazo, H.; Moreno, A.; Odijk, J.; and Piperidis, S., eds., Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, 4003–4012. European Language Resources Association. Wu, X.; Ma, G.; and Hu, S. 2022. Query-as-context
2308.08285#61
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
61
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. Multitask prompted training enables zero-shot task generalization, 2022.
2308.08493#61
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
62
• Programmability: AutoGen allows users to extend/customize existing agents to develop systems satisfying their specific needs with ease. (A1-A6). For example, with AutoGen, the core workflow code in A4 is reduced from over 430 lines to 100 lines, for a 4x saving. • Allowing human involvement: AutoGen provides a native mechanism to achieve human participation and/or human oversight. With AutoGen, humans can seamlessly and optionally cooperate with AIs to solve problems or generally participate in the activity. AutoGen also facilitates interactive user instructions to ensure the process stays on the desired path. (A1, A2, A5, and A6) • Collaborative/adversarial agent interactions: Like many collaborative agent systems (Dong et al., 2023), agents in AutoGen can share information and knowledge, to complement each other’s abilities and collectively arrive at better solutions. (A1, A2, A3, and A4). Analogously, in certain scenarios, some agents are required to work in an adversarial way. Relevant information is shared among different conversations in a controlled manner, preventing distraction or hallucination. (A4, A6). AutoGen supports both patterns, enabling effective utilization and augmentation of LLMs. # B.1 General Guidelines for Using AutoGen Below we give some recommendations for using agents in AutoGen to accomplish a task.
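The native human-involvement mechanism described above is controlled largely by one constructor argument. Below is a minimal sketch based on the pyautogen API; the agent names, model, and task are assumptions. With human_input_mode="ALWAYS" the human is prompted every turn and can steer the conversation or press Enter to skip, while "TERMINATE" consults the human only when a termination condition triggers.

```python
import autogen

llm_config = {"config_list": [{"model": "gpt-4"}]}  # assumed configuration

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

# The user proxy solicits human input every turn; a typed reply steers the
# conversation, while skipping lets the agents continue on their own.
human = autogen.UserProxyAgent(
    name="human",
    human_input_mode="ALWAYS",
    code_execution_config={"work_dir": "scratch"},
)

human.initiate_chat(assistant, message="Draft a data-cleaning script; I will review.")
```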
2308.08155#62
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
62
11-16, 2020, 4003–4012. European Language Resources Association. Wu, X.; Ma, G.; and Hu, S. 2022. Query-as-context Pre-training for Dense Passage Retrieval. CoRR, abs/2212.09598. Wu, X.; Ma, G.; Lin, M.; Lin, Z.; Wang, Z.; and Hu, S. 2023a. ConTextual Masked Auto-Encoder for Dense Passage Retrieval. In Williams, B.; Chen, Y.; and Neville, J., eds., Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, USA, February 7-14, 2023, 4738–4746. AAAI Press. Wu, X.; Ma, G.; Wang, P.; Lin, M.; Lin, Z.; Zhang, F.; and Hu, S. 2023b. CoT-MAE v2: Contextual Masked Auto-Encoder with Multi-view Modeling for Passage Retrieval. arXiv:2304.03158.
2308.08285#62
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08493
62
Thibault Sellam, Dipanjan Das, and Ankur Parikh. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7881–7892, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.704. URL https://aclanthology.org/2020.acl-main.704. Erik F. Tjong Kim Sang and Fien De Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pp. 142–147, 2003. URL https://www.aclweb.org/anthology/W03-0419. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023a.
2308.08493#62
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
63
# B.1 General Guidelines for Using AutoGen Below we give some recommendations for using agents in AutoGen to accomplish a task. 1. Consider using built-in agents first. For example, AssistantAgent is pre-configured to be backed by GPT-4, with a carefully designed system message for generic problem-solving via code. The UserProxyAgent is configured to solicit human inputs and perform tool execution. Many problems can be solved by simply combining these two agents (see the sketch below). When customizing agents for an application, consider the following options: (1) human input mode, termination condition, code execution configuration, and LLM configuration can be specified when constructing an agent; (2) AutoGen supports adding instructions in an initial user message, which is an effective way to boost performance without needing to modify the system message; (3) UserProxyAgent can be extended to handle different execution environments and exceptions, etc.; (4) when system message modification is needed, consider leveraging the LLM’s capability to program its conversation flow with natural language. 2. Start with a simple conversation topology. Consider using the two-agent chat or the group chat setup first, as they can often be extended with the least code. Note that the two-agent chat can be easily extended to involve more than two agents by using LLM-consumable functions in a dynamic way.
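To make guidelines 1 and 2 concrete, here is a minimal sketch of the two-agent setup against the pyautogen Python API; the model string, key placeholder, task message, and working directory are illustrative, and exact constructor arguments may differ across AutoGen versions:

```python
import autogen

# LLM backend configuration; model name and key source are assumptions.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_OPENAI_KEY"}]}

# Built-in assistant: backed by GPT-4, with the default problem-solving system message.
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

# Built-in user proxy: executes suggested code locally and can solicit human input.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",            # fully autonomous; see item 4 below
    max_consecutive_auto_reply=5,        # guard against runaway loops
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The task goes in the initial user message, per option (2) above.
user_proxy.initiate_chat(
    assistant,
    message="What date is today? Then compute the number of days until New Year.",
)
```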
2308.08155#63
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08285
63
CoT-MAE v2: Contextual Masked Auto-Encoder with Multi-view Modeling for Passage Retrieval. arXiv:2304.03158. Yu, W.; Iter, D.; Wang, S.; Xu, Y.; Ju, M.; Sanyal, S.; Zhu, C.; Zeng, M.; and Jiang, M. 2023. Generate rather than Retrieve: Large Language Models are Strong Context Generators. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Zhou, K.; Liu, X.; Gong, Y.; Zhao, W. X.; Jiang, D.; Duan, N.; and Wen, J.-R. 2022. MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers. arXiv preprint arXiv:2212.07841. Zou, L.; Lu, W.; Liu, Y.; Cai, H.; Chu, X.; Ma, D.; Shi, D.; Sun, Y.; Cheng, Z.; Gu, S.; Wang, S.; and Yin, D. 2023. Pre-trained Language
2308.08285#63
Pre-training with Large Language Model-based Document Expansion for Dense Passage Retrieval
In this paper, we systematically study the potential of pre-training with Large Language Model(LLM)-based document expansion for dense passage retrieval. Concretely, we leverage the capabilities of LLMs for document expansion, i.e. query generation, and effectively transfer expanded knowledge to retrievers using pre-training strategies tailored for passage retrieval. These strategies include contrastive learning and bottlenecked query generation. Furthermore, we incorporate a curriculum learning strategy to reduce the reliance on LLM inferences. Experimental results demonstrate that pre-training with LLM-based document expansion significantly boosts the retrieval performance on large-scale web-search tasks. Our work shows strong zero-shot and out-of-domain retrieval abilities, making it more widely applicable for retrieval when initializing with no human-labeled data.
http://arxiv.org/pdf/2308.08285
Guangyuan Ma, Xing Wu, Peng Wang, Zijia Lin, Songlin Hu
cs.IR, cs.CL
10 pages, 3 tables, 4 figures, under review
null
cs.IR
20230816
20230816
[ { "id": "2203.05765" }, { "id": "2205.09153" }, { "id": "2204.10641" }, { "id": "2212.07841" }, { "id": "2304.03158" }, { "id": "2205.12035" }, { "id": "2102.07662" }, { "id": "2003.07820" } ]
2308.08155
64
3. Try to reuse built-in reply methods based on LLM, tool, or human before implementing a custom reply method because they can often be reused to achieve the goal in a simple way (e.g., the built-in agent GroupChatManager’s reply method reuses the built-in LLM-based reply function when selecting the next speaker, ref. A5 in Section 3). 4. When developing a new application with UserProxyAgent, start with humans always in the loop, i.e., human_input_mode=‘ALWAYS’, even if the target operation mode is more autonomous. This helps evaluate the effectiveness of AssistantAgent, tuning the prompt, discovering corner cases, and debugging. Once confident with small-scale success, consider setting human_input_mode=‘NEVER’. This enables LLM as a backend, and one can either use the LLM or manually generate diverse system messages to simulate different use cases. A sketch of this progression follows.
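A sketch of the development-to-autonomy progression from item 4, assuming the same pyautogen API as above (the agent names and work_dir are illustrative):

```python
import autogen

# During development: a human vets every assistant turn before anything runs.
dev_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

# After small-scale success: the proxy auto-replies and executes unattended.
prod_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)
```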
2308.08155#64
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
64
Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023b.
2308.08493#64
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
65
5. Despite the numerous advantages of AutoGen agents, there could be cases/scenarios where other libraries/packages could help. For example: (1) For (sub)tasks that do not have requirements for back-and-forth trouble-shooting, multi-agent interaction, etc., a unidirectional (no back-and-forth message exchange) pipeline can also be orchestrated with LangChain (LangChain, 2023), LlamaIndex (Liu, 2022), Guidance (Guidance, 2023), Semantic Kernel (Semantic-Kernel, 2023), Gorilla (Patil et al., 2023) or low-level inference API (‘autogen.oai’ provides an enhanced LLM inference layer at this level) (Dibia, 2023); a sketch of such a one-shot call follows below. (2) When existing tools from LangChain etc. are helpful, one can use them as tool backends for AutoGen agents. For example, one can readily use tools, e.g., Wolfram Alpha, from LangChain in AutoGen agent. (3) For specific applications, one may want to leverage agents implemented in other libraries/packages. To achieve this, one could wrap those agents as conversable agents in AutoGen and then use them to build
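For point (1), a one-shot call through the enhanced inference layer looks roughly like this. Treat it as a sketch: `autogen.oai.ChatCompletion` was the v0.1-era entry point (later superseded), and the dict-style response access assumes the OpenAI-style payload of that era:

```python
import autogen

config_list = [{"model": "gpt-4", "api_key": "YOUR_OPENAI_KEY"}]  # illustrative

# No agent loop, no back-and-forth: a single enhanced-inference call.
response = autogen.oai.ChatCompletion.create(
    config_list=config_list,
    messages=[{"role": "user", "content": "Summarize this report in one sentence."}],
)

# Assumption: OpenAI-style response structure.
print(response["choices"][0]["message"]["content"])
```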
2308.08155#65
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
65
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2017. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353–355, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5446. URL https://aclanthology.org/W18-5446. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537, 2019.
2308.08493#65
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
66
may want to leverage agents implemented in other libraries/packages. To achieve this, one could wrap those agents as conversable agents in AutoGen and then use them to build LLM applications through multi-agent conversation. (4) It can be hard to find an optimal operating point among many tunable choices, such as the LLM inference configuration. Blackbox optimization packages like ‘flaml.tune’ (Wang et al., 2021) can be used together with AutoGen to automate such tuning.
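A hedged sketch of point (2) from the list above, exposing a LangChain utility as a tool backend for an AutoGen agent pair. The Wolfram wrapper import path and the function-calling schema follow the 2023-era APIs of both libraries and may have moved since; the wrapper also needs the `wolframalpha` package and a `WOLFRAM_ALPHA_APPID` environment variable:

```python
import autogen
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper  # assumed import path

def wolfram_query(query: str) -> str:
    """Tool backend: delegate a math computation to Wolfram Alpha via LangChain."""
    return WolframAlphaAPIWrapper().run(query)

llm_config = {
    "config_list": [{"model": "gpt-4"}],
    "functions": [{  # advertise the tool to the LLM via function calling
        "name": "wolfram_query",
        "description": "Query Wolfram Alpha for a math computation.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }],
}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    function_map={"wolfram_query": wolfram_query},  # the proxy executes tool calls
)
user_proxy.initiate_chat(assistant, message="What is the integral of x^2 * sin(x)?")
```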
2308.08155#66
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
66
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners, 2022. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia, PA, 23:170, 2013.
2308.08493#66
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
67
# B.2 Future Work This work raises many research questions and future directions. Designing optimal multi-agent workflows: Creating a multi-agent workflow for a given task can involve many decisions, e.g., how many agents to include, how to assign agent roles and agent capabilities, how the agents should interact with each other, and whether to automate a particular part of the workflow. There may not exist a one-fits-all answer, and the best solution might depend on the specific application. This raises important questions: For what types of tasks and applications are multi-agent workflows most useful? How do multiple agents help in different applications? For a given task, what is the optimal (e.g., cost-effective) multi-agent workflow?
2308.08155#67
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
67
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015. Yiming Zhu, Peixian Zhang, Ehsan ul Haq, Pan Hui, and Gareth Tyson. Can chatgpt reproduce human-generated labels? a study of social computing tasks. ArXiv, abs/2304.10145, 2023. # Appendices
2308.08493#67
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
68
Creating highly capable agents: AutoGen can enable the development of highly capable agents that leverage the strengths of LLMs, tools, and humans. Creating such agents is crucial to ensuring that a multi-agent workflow can effectively troubleshoot and make progress on a task. For example, we observed that CAMEL, another multi-agent LLM system, cannot effectively solve problems in most cases primarily because it lacks the capability to execute tools or code. This failure shows that LLMs and multi-agent conversations with simple role playing are insufficient, and highly capable agents with diverse skill sets are essential. We believe that more systematic work will be required to develop guidelines for application-specific agents, to create a large OSS knowledge base of agents, and to create agents that can discover and upgrade their skills (Cai et al., 2023). Enabling scale, safety, and human agency: Section 3 shows how complex multi-agent workflows can enable new applications, and future work will be needed to assess whether scaling further can help solve extremely complex tasks. However, as these workflows scale and grow more complex, it may become difficult to log and adjust them. Thus, it will become essential to develop clear mechanisms and tools to track and debug their behavior. Otherwise, these techniques risk resulting in incomprehensible, unintelligible chatter among agents (Lewis et al., 2017).
2308.08155#68
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
68
# Appendices # A LIST OF ALL GUIDED AND GENERAL INSTRUCTIONS Table 5 presents a thorough collection of all the guided and general instructions employed throughout our study. Table 5: A comprehensive list of all guided and general instructions used in our experiments. Placeholders include: {split name} for the partition (or split) name; {dataset name} for the dataset name; {input} for the first part of the dataset instance cut at the tail randomly or the whole first sentence in NLI-based datasets; and {label} for the corresponding label of the incomplete input instance.
2308.08493#68
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
69
Our work also shows how complex, fully autonomous workflows with AutoGen can be useful, but fully autonomous agent conversations will need to be used with care. While the autonomous mode AutoGen supports could be desirable in many scenarios, a high level of autonomy can also pose potential risks, especially in high-risk applications (Amodei et al., 2016; Weld & Etzioni, 1994). As a result, building fail-safes against cascading failures and exploitation, mitigating reward hacking, out of control and undesired behaviors, maintaining effective human oversight of applications built with AutoGen agents will become important. While AutoGen provides convenient and seamless involvement of humans through a user proxy agent, developers and stakeholders still need to understand and determine the appropriate level and pattern of human involvement to ensure the safe and ethical use of the technology (Horvitz, 1999; Amershi et al., 2019). # C Default System Message for Assistant Agent
2308.08155#69
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08155
70
System Message: Suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute. 1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself. 2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly. Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill. When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can’t modify your code. So do not suggest incomplete code which requires users to modify. Don’t use a code block if it’s not intended to be executed by the user. If you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first
2308.08155#70
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
70
Classification (single-instance) Instruction: You are provided with the first piece of an instance from the {split name} split of the {dataset name} dataset. Finish the second piece of the instance as exactly appeared in the dataset. Only rely on the original form of the instance in the dataset to finish the second piece. Label: {label} First Piece: {input} Second Piece: Instruction: Finish the second piece based on the first piece, such that these two pieces become a single instance with the following label. Label: {label} First Piece: {input} Second Piece: NLI (paired-instance) Instruction: You are provided with Sentence 1 from the {split name} split of the {dataset name} dataset. Finish Sentence 2 as appeared in the dataset. Sentence 2 must exactly match the instance in the dataset. Sentence 1: {input} Label: {label} Sentence 2: Instruction: Finish Sentence 2 based on Sentence 1, such that the following label shows the logical relationship between Sentence 1 and Sentence 2. Sentence 1: {input} Label: {label} Sentence 2: Summarization (single-instance)
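The classification templates above reduce to straightforward string formatting. A minimal sketch, with the template text copied from the table and the function and variable names our own:

```python
# Guided and general instruction templates for classification (single-instance),
# copied from Table 5; placeholder names are mapped to Python identifiers.
GUIDED_CLS = (
    "Instruction: You are provided with the first piece of an instance from the "
    "{split_name} split of the {dataset_name} dataset. Finish the second piece of "
    "the instance as exactly appeared in the dataset. Only rely on the original form "
    "of the instance in the dataset to finish the second piece.\n"
    "Label: {label}\nFirst Piece: {first_piece}\nSecond Piece:"
)
GENERAL_CLS = (
    "Instruction: Finish the second piece based on the first piece, such that these "
    "two pieces become a single instance with the following label.\n"
    "Label: {label}\nFirst Piece: {first_piece}\nSecond Piece:"
)

def build_prompts(split_name: str, dataset_name: str, first_piece: str, label: str):
    """Return the (guided, general) prompt pair for one dataset instance."""
    ctx = dict(split_name=split_name, dataset_name=dataset_name,
               first_piece=first_piece, label=label)
    return GUIDED_CLS.format(**ctx), GENERAL_CLS.format(**ctx)

# Illustrative usage; the instance text is made up.
guided, general = build_prompts("test", "AG News", "Wall St. Bears Claw Back", "Business")
```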
2308.08493#70
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
71
use a code block if it’s not intended to be executed by the user. If you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line. Don’t include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use ‘print’ function for the output when relevant. Check the execution result returned by the user. If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can’t be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try. When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible. Prompting techniques color code: Role Play; Control Flow; Output Confine; Facilitate Automation; Grounding
2308.08155#71
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
71
label shows the logical relationship between Sentence 1 and Sentence 2. Sentence 1: {input} Label: {label} Sentence 2: Summarization (single-instance) Instruction: You are provided with the first piece of a summary from the {split name} split of the {dataset name} dataset. Finish the second piece of the summary as exactly appeared in the dataset. Only rely on the original form of the summary in the dataset to finish the second piece. First Piece: {input} Second Piece: Instruction: Finish the second piece based on the first piece, such that these two pieces become a single summary. First Piece: {input} Second Piece: One-sentence Summary (single-instance) Instruction: You are provided with the first piece of a one-sentence summary from the {split name} split of the {dataset name} dataset. Finish the second piece of the summary as exactly appeared in the dataset. Only rely on the original form of the summary in the dataset to finish the second piece. First Piece: {input} Second Piece: Instruction: Finish the second piece based on the first piece, such that these
2308.08493#71
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
72
Figure 5: Default system message for the built-in assistant agent in AutoGen (v0.1.1). This is an example of conversation programming via natural language. It contains instructions of different types, including role play, control flow, output confine, facilitate automation, and grounding. Figure 5 shows the default system message for the built-in assistant agent in AutoGen (v0.1.1), where we introduce several new prompting techniques and highlight them accordingly. When combining these new prompting techniques together, we can program a fairly complex conversation even with the simplest two-agent conversation topology. This approach tries to exploit the capability of LLMs in implicit state inference to a large degree. LLMs do not follow all the instructions perfectly, so the design of the system needs to have other mechanisms to handle the exceptions and faults. Some instructions can have ambiguities, and the designer should either reduce them for preciseness or intentionally keep them for flexibility and address the different situations in other agents. In general, we observe that GPT-4 follows the instructions better than GPT-3.5-turbo. # D Application Details # A1: Math Problem Solving
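When the default message of Figure 5 needs to be replaced, it is a single constructor argument on the assistant. A sketch, assuming the pyautogen API; the custom message text is illustrative:

```python
import autogen

assistant = autogen.AssistantAgent(
    name="assistant",
    # Override the built-in default shown in Figure 5 with a custom message.
    system_message=(
        "You are a cautious coding assistant. Always explain your plan "
        "before writing code, and reply TERMINATE when everything is done."
    ),
    llm_config={"config_list": [{"model": "gpt-4"}]},
)
```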
2308.08155#72
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08155
73
# D Application Details # A1: Math Problem Solving Scenario 1: Autonomous Problem Solving. We perform both qualitative and quantitative evaluations in this scenario. For all evaluations, we use GPT-4 as the base model, and pre-install the “sympy” package in the execution environment. We compare AutoGen with the following LLM-based agent systems: • AutoGPT: The out-of-box AutoGPT is used. We initialize AutoGPT by setting the purpose to “solve math problems”, resulting in a “MathSolverGPT” with auto-generated goals. • ChatGPT+Plugin: We enable the Wolfram Alpha plugin (a math computation engine) in the OpenAI web client. • ChatGPT+Code Interpreter: This is a recent feature in OpenAI web client. Note that the above two premium features from ChatGPT require a paid subscription to be accessed and are the most competitive commercial systems. • LangChain ReAct+Python: We use Python agent from LangChain. To handle parsing errors, we set “handle_parsing_errors=True”, and use the default zero-shot ReAct prompt.
2308.08155#73
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
73
# B FEW-SHOT IN-CONTEXT LEARNING PROMPT Figure 3 showcases the few-shot ICL prompt employed to evaluate the model-generated candidate against the reference text using GPT-4. Within this prompt, we present GPT-4 with one exact match and three exemplary instances of near-exact matches, all pre-labeled by human evaluation. These examples guide GPT-4 in discerning the difference between near-exact and inexact matches, in line with human assessment.
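Alongside the GPT-4 judge, the overlap-based partition test (the paper's first idea) can be sketched with off-the-shelf ROUGE-L scoring. A minimal sketch: the `rouge_score` and `scipy` usage is standard, but the paired t-test is our stand-in, since this chunk does not spell out the exact significance test the authors used:

```python
from rouge_score import rouge_scorer  # pip install rouge-score
from scipy import stats

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def overlap(reference_tail: str, completion: str) -> float:
    """ROUGE-L F1 between the reference second piece and a generated completion."""
    return scorer.score(reference_tail, completion)["rougeL"].fmeasure

def partition_contaminated(ref_tails, guided_outs, general_outs, alpha=0.05) -> bool:
    """Flag a partition when guided completions overlap the references
    significantly more than general-instruction completions do."""
    guided = [overlap(r, c) for r, c in zip(ref_tails, guided_outs)]
    general = [overlap(r, c) for r, c in zip(ref_tails, general_outs)]
    t_stat, p_value = stats.ttest_rel(guided, general)  # assumption: paired t-test
    return t_stat > 0 and p_value < alpha
```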
2308.08493#73
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
74
• Multi-Agent Debate (Liang et al., 2023): We modified the code of the multi-agent debate to perform evaluation. By default, there are three agents: an affirmative agent, a negative agent, and a moderator. We also conducted preliminary evaluations on several other multi-agent systems, including BabyAGI, CAMEL, and MetaGPT. The results indicate that they are not suitable choices for solving math problems out of the box. For instance, when MetaGPT is tasked with solving a math problem, it begins developing software to address the problem, but most of the time, it does not actually solve the problem. We have included the test examples in Appendix E.

Table 2: Qualitative evaluation of two math problems from the MATH dataset within the autonomous problem-solving scenario. Each LLM-based system is tested three times on each of the problems. This table reports the problem-solving correctness and summarizes the reasons for failure.
2308.08155#74
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
74
Instruction: You are provided with a reference text and a candidate text. The candidate text is a generated replica of the reference text. Your task is to determine if the candidate text is an exact or near-exact match of the reference text. Near-exact match candidates must precisely match the reference candidate in terms of sentence structure, overlap, and contextual similarity. Respond only with ”Yes” or ”No”. - - - Example 1: Reference Text: The cat waited at the top. Candidate Text: The cat waited at the top. Answer: Yes (exact match) - - - Example 2: Reference Text: icy surface of Jupiter’s largest moon, Ganymede. These irregular masses may be rock formations, supported by Ganymede’s icy shell for billions of years. Candidate Text: icy surface of Jupiter’s largest moon, Ganymede. These irregular masses may be rock formations, supported by Ganymede’s icy shell for billions of years. This discovery supports the theory that Ganymede has a subsurface ocean. Scientists used gravity data from NASA’s Galileo spacecraft to create a geophysical model of the interior of Ganymede. Answer: Yes
2308.08493#74
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
75
(a) Evaluation on the first problem, which asks to simplify a square root fraction:

System | Correctness | Failure Reason
AutoGen | 3/3 | N/A.
AutoGPT | 0/3 | The LLM gives code without the print function so the result is not printed.
ChatGPT+Plugin | 1/3 | The return from Wolfram Alpha contains 2 simplified results, including the correct answer, but GPT-4 always chooses the wrong answer.
ChatGPT+Code Interpreter | 2/3 | Returns a wrong decimal result.
LangChain ReAct | 0/3 | LangChain gives 3 different wrong answers.
Multi-Agent Debate | 0/3 | It gives 3 different wrong answers due to calculation errors.

(b) Evaluation on the second number theory problem:

System | Correctness | Failure Reason
AutoGen | 2/3 | The final answer from code execution is wrong.
AutoGPT | 0/3 | The LLM gives code without the print function so the result is not printed.
ChatGPT+Plugin | 1/3 | For one trial, GPT-4 got stuck because it keeps giving wrong queries and has to be stopped. Another trial simply gives a wrong answer.
ChatGPT+Code Interpreter | 0/3 | It gives 3 different wrong answers.
LangChain ReAct | 0/3 | LangChain gives 3 different wrong answers.
Multi-Agent Debate | 0/3 | It gives 3 different wrong answers.
2308.08155#75
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
75
subsurface ocean. Scientists used gravity data from NASA’s Galileo spacecraft to create a geophysical model of the interior of Ganymede. Answer: Yes (near-exact match) - - - Example 3: Reference Text: 50th Anniversary of Normandy Landings lasts a year. Candidate Text: The 50th anniversary celebration of the first Normandy landing will last a year. Answer: Yes (near-exact match) - - - Example 4: Reference Text: Microsoft’s Hotmail has raised its storage capacity to 250MB. Candidate Text: Microsoft has increased the storage capacity of its Hotmail e-mail service to 250MB. Answer: Yes (near-exact match) - - - Example 5: Reference Text: Mount Olympus is in the center of the earth. Candidate Text: Mount Olympus is located at the center of the earth. Answer:
2308.08493#75
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
76
For the qualitative evaluation, we utilize two level-5 problems from the MATH dataset, testing each problem three times. The first problem involves simplifying a square root fraction, and the second problem involves solving a number theory issue. The correctness counts and reasons for failure are detailed in Table 2. For the quantitative evaluation, we conduct two sets of experiments on the MATH dataset to assess the correctness of these systems: (1) an experiment involving 120 level-5 (the most challenging level) problems, including 20 problems from six categories, excluding geometry, and (2) an experiment on the entire test set, which includes 5000 problems. We exclude AutoGPT from this evaluation as it cannot access results from code executions and does not solve any problems in the qualitative evaluation. Our analysis of the entire dataset reveals that AutoGen achieves an overall accuracy of 69.48%, while GPT-4’s accuracy stands at 55.18%. From these evaluations, we have the following observations regarding the problem-solving success rate and user experience of these systems:
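Before turning to those observations, note that this kind of aggregate accuracy reduces to a simple tally over the test set. The sketch below assumes hypothetical solve() and canonical() helpers (a wrapper around the system under test and a MATH-style answer normalizer); neither is part of any released codebase.

```python
# Sketch: tallying exact-answer accuracy over MATH problems.
# solve() and canonical() are hypothetical helpers, not released code.
from typing import Callable, Iterable, Tuple

def accuracy(
    problems: Iterable[Tuple[str, str]],          # (problem text, ground-truth answer)
    solve: Callable[[str], str],                  # the system under test
    canonical: Callable[[str], str] = str.strip,  # answer normalizer
) -> float:
    """Fraction of problems whose normalized answer matches the ground truth."""
    correct = total = 0
    for problem, truth in problems:
        total += 1
        if canonical(solve(problem)) == canonical(truth):
            correct += 1
    return correct / total if total else 0.0
```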
2308.08155#76
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
76
Figure 3: A display of the few-shot ICL prompt utilized for instance-level data contamination detection using GPT-4. In this illustration, examples 1 through 4 are part of the prompt, while example 5 is updated with a new input reference and candidate for evaluation, depending on whether there is an exact, near-exact, or inexact match. While Example 1 represents an exact match, the other examples display variations indicating near-exact matches: Example 2 reveals a scenario where the candidate text has substantial overlap with the reference but includes added details; Examples 3 and 4 highlight situations where the candidate text possesses both semantic and structural similarity to the reference text. # C ILLUSTRATIONS OF EXACT, NEAR-EXACT, AND INEXACT MATCHES Displayed in Table 6 are examples of exact, near-exact, and inexact replicas of the reference instance when guided instruction and GPT-4 are used. This table also includes computed metrics such as ROUGE-L, BLEURT, and results from human and GPT-4 few-shot ICL evaluations. In addition, Table 7 showcases comparative outcomes for the same examples using general instruction.
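As an aside, the two overlap metrics reported alongside these examples can be computed with the publicly available rouge-score and BLEURT packages. The snippet below is a sketch under the assumption that the BLEURT-20 checkpoint has been downloaded to a local path of the same name.

```python
# Sketch: computing the ROUGE-L and BLEURT scores reported in Tables 6 and 7.
# Assumes `pip install rouge-score` plus the BLEURT package with a local
# BLEURT-20 checkpoint; the checkpoint path here is an assumption.
from rouge_score import rouge_scorer
from bleurt import score as bleurt_score

rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
bleurt = bleurt_score.BleurtScorer("BLEURT-20")  # path to the downloaded checkpoint

def overlap_scores(reference: str, candidate: str) -> dict:
    """Return ROUGE-L F1 and BLEURT for one reference/candidate pair."""
    rouge_l = rouge.score(reference, candidate)["rougeL"].fmeasure
    bleurt_val = bleurt.score(references=[reference], candidates=[candidate])[0]
    return {"rougeL": rouge_l, "bleurt": bleurt_val}
```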
2308.08493#76
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
77
Problem-solving success rate: Results from the quantitative evaluations show that AutoGen can help achieve the highest problem-solving success rate among all the compared methods. The qualitative evaluations elucidate common failure reasons across several alternative approaches. ChatGPT+Code Interpreter fails to solve the second problem, and ChatGPT+Plugin struggles to solve both problems. AutoGPT fails on both problems due to code execution issues. The LangChain agent also fails on both problems, producing code that results in incorrect answers in all trials. • Based on the qualitative evaluation, we analyze the user experience concerning the verbosity of the response and the ability of the LLM-based system to run without unexpected behaviors. ChatGPT+Plugin is the least verbose, mainly because Wolfram queries are much shorter than Python code. AutoGen, ChatGPT+Code Interpreter, and LangChain exhibit similar verbosity, although LangChain is slightly more verbose due to more code execution errors. AutoGPT is the most verbose system owing to predefined steps like THOUGHTS, REASONING, and PLAN, which it includes in replies every time. Overall, AutoGen and ChatGPT+Code Interpreter operate smoothly without exceptions.
2308.08155#77
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08155
78
owing to predefined steps like THOUGHTS, REASONING, and PLAN, which it includes in replies every time. Overall, AutoGen and ChatGPT+Code Interpreter operate smoothly without exceptions. We note the occurrences of undesired behaviors from other LLM-based systems that could affect user experience: AutoGPT consistently outputs code without the ‘print’ statement and cannot correct this, requiring the user to run them manually; ChatGPT with Wolfram Alpha plugin has the potential to become stuck in a loop that must be manually stopped; and LangChain ReAct could exit with a parse error, necessitating the passing of a ‘handle parse error’ parameter.
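The parse-error workaround mentioned above corresponds to an agent-executor flag in LangChain (spelled handle_parsing_errors in recent releases). A minimal sketch follows, using a fake LLM so it runs without credentials; all names besides that flag are illustrative.

```python
# Sketch: guarding a LangChain ReAct agent against output-parsing errors.
# FakeListLLM keeps the example runnable without API credentials.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms.fake import FakeListLLM

llm = FakeListLLM(responses=["Final Answer: 42"])  # canned response for the demo
tools = [Tool(name="noop", func=lambda q: q, description="echoes its input")]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # the ReAct-style agent
    handle_parsing_errors=True,  # retry instead of raising on malformed output
)
print(agent.run("What is the answer?"))
```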
2308.08155#78
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
78
Reference Instance: Review: Bromwell High is a cartoon comedy. It ran at the same time as some other programs about school life, such as ”Teachers”. My 35 years in the teaching profession lead me to believe that Bromwell High’s satire is much closer to reality than is ”Teachers”. The scramble to survive financially, the insightful students who can see right through their pathetic teachers’ pomp, the pettiness of the whole situation, all remind me of the schools I knew and their students. When I saw the episode in which a student repeatedly tried to burn down the school, I immediately recalled ......... at .......... High. A classic line: INSPECTOR: I’m here to sack one of your teachers. STUDENT: Welcome to Bromwell High. I expect that many adults of my age think that Bromwell High is far fetched. What a pity that it isn’t! Label: 1 (positive) Instruction: You are provided with the first piece of an instance from the train split of the IMDB dataset. Finish the second piece of the instance as exactly appeared in the dataset. Only rely on the original form of the instance in the dataset to finish the second piece. Label: 1 (positive)
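The guided instruction shown above follows a fixed template parameterized by dataset name, split, label, and the initial segment of the instance. Below is a minimal sketch of that template; the function and variable names are ours, not the paper's.

```python
# Sketch: assembling the guided instruction used for instance-level probing.
# The wording mirrors the example above; all names are illustrative.
from typing import Optional

def guided_instruction(
    dataset: str, split: str, first_piece: str, label: Optional[str] = None
) -> str:
    """Build the guided-instruction prompt for one partial instance."""
    prompt = (
        f"Instruction: You are provided with the first piece of an instance "
        f"from the {split} split of the {dataset} dataset. Finish the second "
        f"piece of the instance as exactly appeared in the dataset. Only rely "
        f"on the original form of the instance in the dataset to finish the "
        f"second piece.\n"
    )
    if label is not None:
        prompt += f"Label: {label}\n"
    prompt += f"First Piece: {first_piece}\nSecond Piece:"
    return prompt

# e.g. guided_instruction("IMDB", "train", "Bromwell High is a cartoon comedy. ...", "1 (positive)")
```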
2308.08493#78
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
79
Figure 6: Examples of three settings utilized to solve math problems using AutoGen: (Gray) Enables a workflow where a student collaborates with an assistant agent to solve problems, either autonomously or in a human-in-the-loop mode. (Gray + Orange) Facilitates a more sophisticated workflow wherein the assistant, on the fly, can engage another user termed “expert”, who is in the loop with their own assistant agent, to aid in problem-solving if its own solutions are not satisfactory. Scenario 2: Human-in-the-loop Problem Solving. For challenging problems that these LLM systems cannot solve autonomously, human feedback during the problem-solving process can be helpful. To incorporate human feedback with AutoGen, one can set human_input_mode=‘ALWAYS’ in the user proxy agent. We select one challenging problem that none of these systems can solve autonomously across three trials. We adhere to the process outlined below to provide human inputs for all the compared methods:
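Enabling this mode is a single constructor argument on the user proxy agent. A minimal sketch, assuming the pyautogen package with credentials supplied via the environment (the message is a placeholder); the numbered input steps follow below.

```python
# Sketch: human-in-the-loop math solving with AutoGen (pyautogen).
# The llm_config contents and the message are placeholders/assumptions.
import autogen

llm_config = {"config_list": [{"model": "gpt-4"}]}  # API key via environment

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
student = autogen.UserProxyAgent(
    name="student",
    human_input_mode="ALWAYS",  # ask the human for feedback after every turn
    code_execution_config={"work_dir": "math", "use_docker": False},
)

student.initiate_chat(
    assistant,
    message="Find the equation of the plane which bisects the angle between ...",
)
```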
2308.08155#79
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
79
Label: 1 (positive) First Piece: Bromwell High is a cartoon comedy. It ran at the same time as some other programs about school life, such as ”Teachers”. My 35 years in the teaching profession lead me to believe that Bromwell High’s satire is much closer to reality than is ”Teachers”. BLEURT Score: 0.99 ROUGE-L Score: 1.00 GPT-4 ICL: Exact Match Human Eval.: Exact Match Second Piece: The scramble to survive financially, the insightful students who can see right through their pathetic teachers’ pomp, the pettiness of the whole situation, all remind me of the schools I knew and their students. When I saw the episode in which a student repeatedly tried to burn down the school, I immediately recalled ......... at .......... High. A classic line: INSPECTOR: I’m here to sack one of your teachers. STUDENT: Welcome to Bromwell High. I expect that many adults of my age think that Bromwell High is far fetched. What a pity that it isn’t!
2308.08493#79
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
80
1. Input the problem: Find the equation of the plane which bisects the angle between the planes 3x − 6y + 2z + 5 = 0 and 4x − 12y + 3z − 3 = 0, and which contains the point (−5, −1, −5). Enter your answer in the form Ax + By + Cz + D = 0, where A, B, C, D are integers such that A > 0 and gcd(|A|, |B|, |C|, |D|) = 1. 2. The response from the system does not solve the problem correctly. We then give a hint to the model: Your idea is not correct. Let’s solve this together. Suppose P = (x, y, z) is a point that lies on a plane that bisects the angle, the distance from P to the two planes is the same. Set up this equation first. 3. We expect the system to give the correct distance equation. Since the equation involves an absolute sign that is hard to solve, we would give the next hint: Consider the two cases to remove the abs sign and get two possible solutions. 4. If the system returns the two possible solutions and doesn’t continue to the next step, we give the last hint: Use point (-5,-1,-5) to determine which is correct and give the final answer.
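For reference, the derivation these hints walk toward can be written out explicitly; the block below is our own worked version of steps 2 through 4, not text from the paper.

```latex
% Step 2: equal distances from P=(x,y,z) to the two planes
\frac{|3x-6y+2z+5|}{\sqrt{3^2+(-6)^2+2^2}}
  = \frac{|4x-12y+3z-3|}{\sqrt{4^2+(-12)^2+3^2}}
\;\Longrightarrow\;
\frac{|3x-6y+2z+5|}{7} = \frac{|4x-12y+3z-3|}{13}.
% Step 3: drop the absolute values, giving two candidate planes
13(3x-6y+2z+5) = \pm\,7(4x-12y+3z-3)
\;\Longrightarrow\;
11x+6y+5z+86=0 \quad\text{or}\quad 67x-162y+47z+44=0.
% Step 4: substituting (-5,-1,-5) gives 11(-5)+6(-1)+5(-5)+86 = 0,
% so the first plane contains the point and the answer is 11x+6y+5z+86=0.
```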
2308.08155#80
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
80
Reference Instance: Sentence 1: Although they were born on different planets, Oscar-winning actor Nicolas Cage’s new son and Superman have something in common - both were named Kal-el. Sentence 2: Nicolas Cage’s son is called Kal-el. Label: 1 (entailment) Instruction: You are provided with Sentence 1 from the train split of the RTE dataset. Finish Sentence 2 as appeared in the dataset. Sentence 2 must exactly match the instance in the dataset. BLEURT Score: 0.87 ROUGE-L Score: 0.82 GPT-4 ICL: Near-exact Match Human Eval.: Near-exact Match Sentence 1: Although they were born on different planets, Oscar-winning actor Nicolas Cage’s new son and Superman have something in common - both were named Kal-el. Label: 1 (entailment) Sentence 2: Nicolas Cage’s new son is named Kal-el.
2308.08493#80
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]
2308.08155
81
5. Final answer is 11x+6y+5z+86=0. We observed that AutoGen consistently solved the problem across all three trials. ChatGPT+Code Interpreter and ChatGPT+Plugin managed to solve the problem in two out of three trials, while AutoGPT failed to solve it in all three attempts. In its unsuccessful attempt, ChatGPT+Code Interpreter failed to adhere to human hints. In its failed trial, ChatGPT+Plugin produced an almost correct solution but had a sign discrepancy in the final answer. AutoGPT was unable to yield a correct solution in any of the trials. In one trial, it derived an incorrect distance equation. In the other two trials, the final answer was incorrect due to code execution errors. Scenario 3: Multi-User Problem Solving. Next-generation LLM applications may necessitate the involvement of multiple real users for collectively solving a problem with the assistance of LLMs. We showcase how AutoGen can be leveraged to effortlessly construct such a system. Specifically, building upon scenario 2 mentioned above, we aim to devise a simple system involving two human users: a student and an expert. In this setup, the student interacts with an LLM assistant to address some problems, and the LLM automatically resorts to the expert when necessary.
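This two-user setup can be expressed in AutoGen by exposing the expert loop as a function the student's assistant may call. A heavily simplified sketch follows, assuming pyautogen; the ask_expert wiring and all names are illustrative, not the paper's exact code.

```python
# Sketch: multi-user problem solving with AutoGen (pyautogen).
# ask_expert and every name here are illustrative assumptions.
import autogen

llm_config = {"config_list": [{"model": "gpt-4"}]}  # API key via environment

def ask_expert(message: str) -> str:
    """Bring a human expert (with their own assistant agent) into the loop."""
    expert_assistant = autogen.AssistantAgent("expert_assistant", llm_config=llm_config)
    expert = autogen.UserProxyAgent("expert", human_input_mode="ALWAYS")
    expert.initiate_chat(expert_assistant, message=message)
    return expert.last_message()["content"]

student_assistant = autogen.AssistantAgent(
    "student_assistant",
    llm_config={
        **llm_config,
        # Advertise ask_expert so the LLM can escalate unsatisfactory solutions.
        "functions": [{
            "name": "ask_expert",
            "description": "Ask a human expert when the current solution is unsatisfactory.",
            "parameters": {
                "type": "object",
                "properties": {"message": {"type": "string"}},
                "required": ["message"],
            },
        }],
    },
)
student = autogen.UserProxyAgent(
    "student",
    human_input_mode="ALWAYS",
    function_map={"ask_expert": ask_expert},  # executed on the student's side
)

student.initiate_chat(student_assistant, message="Help me solve this problem: ...")
```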
2308.08155#81
AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, developers can also flexibly define agent interaction behaviors. Both natural language and computer code can be used to program flexible conversation patterns for different applications. AutoGen serves as a generic infrastructure to build diverse applications of various complexities and LLM capacities. Empirical studies demonstrate the effectiveness of the framework in many example applications, with domains ranging from mathematics, coding, question answering, operations research, online decision-making, entertainment, etc.
http://arxiv.org/pdf/2308.08155
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
cs.AI, cs.CL
43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
null
cs.AI
20230816
20231003
[ { "id": "2103.03874" }, { "id": "2303.17491" }, { "id": "2308.00352" }, { "id": "1802.08802" }, { "id": "2305.17126" }, { "id": "1706.05125" }, { "id": "2309.07864" }, { "id": "2108.11601" }, { "id": "2308.11432" }, { "id": "2305.16291" }, { "id": "2210.03629" }, { "id": "2304.07590" }, { "id": "2306.01337" }, { "id": "2305.14325" }, { "id": "2305.15334" }, { "id": "2307.16877" }, { "id": "2304.03442" }, { "id": "2307.03875" }, { "id": "1708.04782" } ]
2308.08493
81
Reference Instance: Summary: Kim is about to tell mom that Harry bought a new sofa, and he needs grey pillows. BLEURT Score: 0.48 ROUGE-L Score: 0.12 GPT-4 ICL: Inexact Match Human Eval.: Inexact Match Instruction: You are provided with the first piece of a summary from the test split of the SAMSum dataset. Finish the second piece of the summary as exactly appeared in the dataset. Only rely on the original form of the summary in the dataset to finish the second piece. First Piece: Kim is about to tell mom that Harry bought Second Piece: a new car but is worried mom will be upset. Kim is advised to tell mom in a positive way, focusing on Harry’s happiness.

Table 7: Completions generated by GPT-4 under general instruction for examples shown in Table 6. Metric | Reference Instance and Its Replica by General Instruction
2308.08493#81
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction:" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.
http://arxiv.org/pdf/2308.08493
Shahriar Golchin, Mihai Surdeanu
cs.CL, cs.AI, cs.CR, cs.LG
v2 preprint
null
cs.CL
20230816
20231001
[ { "id": "2110.14168" }, { "id": "2204.02311" }, { "id": "1905.00537" }, { "id": "2308.08493" }, { "id": "2109.01652" }, { "id": "2306.01116" } ]