id: string (12–15 chars)
title: string (8–162 chars)
content: string (1–17.6k chars)
prechunk_id: string (0–15 chars)
postchunk_id: string (0–15 chars)
arxiv_id: string (10 chars)
references: sequence (length 1)
2309.10818#24
SlimPajama-DC: Understanding Data Combinations for LLM Training
Neo [3] RedPaj. [7] DC-1 DC-2 DC-3 DC-4 DC-5 DC-6 LBS 7B 9.5 35.0 74.7 44.3 66.9 77.4 38.2 64.4 39.8 86.0 85.0 73.8 54.7 55.3 61.2

Table 6: Results of the six dataset combination configurations of 1.3B models and our LBS-7B model; LBS details are presented in Sec. 6. Bigbench is evaluated under 3-shot using the average of multiple-choice grade. Arc easy and arc challenge are evaluated using 5-shot, 25-shot, and 25-shot, respectively. All other evaluation benchmarks are tested on 0-shot. * indicates that the results are averaged across multiple sub-items inside each benchmark dataset.

Risk of random guessing score on 1.3B models. It is widely recognized that small models, such as the 1.3B variant, may struggle to achieve satisfactory predictions on specific benchmarks like MMLU. Their results could resemble random choices, not truly capturing the model's actual capabilities. To more accurately showcase a model's true potential and reflect the ability of different data combinations, we introduce a novel metric, RRGS (risk of random guessing score), to evaluate the degree of random guessing. Since 25% represents the baseline score for a random guess on MMLU, this metric evaluates the variance around this base value using the average ℓ1 distance across all sub-items. A larger variance suggests a reduced likelihood that the predictions result from mere chance. Given an MMLU score vector X of length N with sub-item scores s1, s2, . . . , sN, RRGS can be formulated as:

RRGS = 1 - \frac{1}{N} \sum_{i=1}^{N} |s_i - 0.25|    (2)

where i is the index of a sub-item in MMLU and N is the number of MMLU items. This metric utilizes the deviation from the 25% baseline, aiming to assess the extent to which a model's predictions resemble random guessing on the MMLU benchmark. The metric has three variations: (1) Consider only items with scores exceeding 25%, i.e., i ∈ {positive item set}.
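As a minimal sketch, Eq. (2) and the three subset variants described in the text could be computed as below; the function and variable names are illustrative and not taken from the paper's released code, and scores are assumed to be fractions in [0, 1].

```python
def rrgs(scores, baseline=0.25, subset="all"):
    """Risk of random guessing score: 1 - mean |s_i - baseline| over sub-items.

    subset: "all" uses every sub-item, "pos" keeps only items above the
    baseline, "neg" keeps only items below it (the three variants in the text).
    """
    if subset == "pos":
        scores = [s for s in scores if s > baseline]
    elif subset == "neg":
        scores = [s for s in scores if s < baseline]
    if not scores:
        return float("nan")
    mean_l1 = sum(abs(s - baseline) for s in scores) / len(scores)
    return 1.0 - mean_l1

# Example: sub-item accuracies hovering near chance give RRGS close to 1.
mmlu_scores = [0.27, 0.23, 0.26, 0.24, 0.29, 0.22]
print(rrgs(mmlu_scores))                 # ~0.978 -> high risk of random guessing
print(rrgs(mmlu_scores, subset="pos"))   # variant over items above 25% only
```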
2309.10818#23
2309.10818#25
2309.10818
[ "2302.13971" ]
2309.10818#25
SlimPajama-DC: Understanding Data Combinations for LLM Training
(2) Focus solely on items with scores less than 25%, i.e., i ∈ {negative item set}. (3) Include all items and sum them up. The results are shown in Table 7. Generally, a model with a higher MMLU average score has a lower risk of random guessing. It is also crucial to employ a broader and more diverse set of benchmarks, such as in Table 6. Additionally, for a detailed understanding, we have cataloged the complete MMLU results for every sub-item in Table 12. This offers a lens into the knowledge assimilated by the pretrained models within each sub-domain on this comprehensive benchmark.

          DC-1   DC-2   DC-3   DC-4   DC-5   DC-6
MMLU      0.262  0.27   0.257  0.256  0.257  0.260
RRGS_pos  0.964  0.963  0.964  0.968  0.965  0.970
RRGS_neg  0.973  0.974  0.973  0.975  0.974  0.969
RRGS_all  0.968  0.967  0.968  0.971  0.969  0.970

Table 7: Evaluation of random guessing probability on sub-items of MMLU.

# 5.3 Training Loss

Figure 3: Illustration of training loss curves (loss vs. training steps, 0 to ~140k). DC-2's curve closely resembles those of DC-3 and 5, so it has been excluded from the figure for clarity.

Fig. 3 presents the training loss curves for various data combinations, from which several insights can be observed: 1) While DC-6 demonstrated the highest average accuracy in our quantitative evaluations, its training loss was also the most substantial. This suggests that a lower training loss doesn't
2309.10818#24
2309.10818#26
2309.10818
[ "2302.13971" ]
2309.10818#26
SlimPajama-DC: Understanding Data Combinations for LLM Training
necessarily correlate directly with superior model performance. 2) DC-4, with a considerable portion of its data coming from the code domain, exhibited the lowest training loss. This implies that as the amount of code in training increases, the training loss diminishes. 3) The training loss values for the other combinations appeared to be relatively consistent with one another.

# 6 Application: Large Batch-size Training on 7B

# 6.1 7B Training Data Combination

Our 7B large batch size (LBS) training dataset is primarily based on SlimPajama; however, to obtain a sufficient proportion of web text, we have incorporated additional web data from the Commoncrawl corpus in RedPajama. We have also adjusted the proportions of various data sources in line with our 1.3B model training. For instance, we elevate the sampling frequency of Github and Wikipedia and increase the diversity of data sources by adding S2orc [25] and Stack-Markdown [21] following [38], as detailed in Table 8.
2309.10818#25
2309.10818#27
2309.10818
[ "2302.13971" ]
2309.10818#27
SlimPajama-DC: Understanding Data Combinations for LLM Training
It's crucial to understand that our primary focus is not solely on achieving the best performance. Instead, we place a higher emphasis on optimizing data combinations and ensuring the convergence of training large language models with large batch sizes. Consequently, we continue to utilize the SlimPajama/RedPajama Commoncrawl instead of the higher-quality RefinedWeb.

dataset                  proportion
Slimpj.Arxiv             4% (54B)
Slimpj.StackExchanges    3.2% (43B)
Slimpj.Github            4.9% (66B)
Slimpj.Wikipedia         7.5% (101B)
Slimpj.Books             4.3% (57B)
Slimpj.C4                17.6% (236B)
S2orc                    3% (40B)
Markdown                 3% (40B)
Slimpj.CC                34.5% (462B)
Redpaj.CC (ext.)         18% (241B)
Total                    1.34T

Table 8: Data combination of 7B model training in large batch size style.

# 6.2 7B Model Training Configurations
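To illustrate how the proportions in Table 8 translate into a sampling scheme, here is a minimal sketch that draws training documents from the sources according to their target weights; the source names mirror Table 8, but the sampling routine is an illustrative stand-in rather than the actual data loader used for training.

```python
import random
from collections import Counter

# Target sampling proportions from Table 8 (fractions of the 1.34T-token mix).
MIX = {
    "Slimpj.Arxiv": 0.040, "Slimpj.StackExchanges": 0.032, "Slimpj.Github": 0.049,
    "Slimpj.Wikipedia": 0.075, "Slimpj.Books": 0.043, "Slimpj.C4": 0.176,
    "S2orc": 0.030, "Markdown": 0.030, "Slimpj.CC": 0.345, "Redpaj.CC (ext.)": 0.180,
}

def sample_source(rng=random):
    """Pick a data source with probability proportional to its mix weight."""
    names, weights = zip(*MIX.items())
    return rng.choices(names, weights=weights, k=1)[0]

# Example: empirical frequencies over many draws approach the Table 8 weights.
counts = Counter(sample_source() for _ in range(100_000))
print(counts.most_common(3))  # Slimpj.CC, Redpaj.CC (ext.), Slimpj.C4 dominate
```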
2309.10818#26
2309.10818#28
2309.10818
[ "2302.13971" ]
2309.10818#28
SlimPajama-DC: Understanding Data Combinations for LLM Training
Architecture. For the 7B model training, we adopt the MPT architecture [38] with a max sequence length of 2,048. We use Triton [35] with Flash Attention [8] as the self-attention implementation. ALiBi is enabled to make the model more flexible for input length extrapolation. The model's total number of parameters is 6.7B.

Tokenizer. The tokenizer used for 7B training is an adapted GPT-NeoX-20b. Following [38], the model's vocabulary size is adjusted to 50,432 for improved mfu, leaving a few tokens available that can be used in subsequent training.

Optimizer. We employ the AdamW optimizer to train our models, adopting these specific hyper-parameters: β1 set at 0.9 and β2 at 0.95. We adopt a learning rate schedule that traces a cosine pattern, concluding with a learning rate that is 10% of its maximum value. Along with this, we use a multi-stage weight
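A minimal sketch of the described learning-rate shape: linear warmup followed by a cosine decay that bottoms out at 10% of the peak rate (the 2,000-step warmup and gradient clipping appear in the continuation of this paragraph). The peak learning rate and total step count below are placeholders, not the paper's exact settings.

```python
import math

def lr_at_step(step, peak_lr, total_steps, warmup_steps=2000, min_ratio=0.1):
    """Linear warmup, then cosine decay ending at min_ratio * peak_lr."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))
    return peak_lr * (min_ratio + (1.0 - min_ratio) * cosine)

# Shape check with placeholder values (not the paper's exact settings):
peak, total = 3e-4, 100_000
for s in (0, 1000, 2000, 50_000, 100_000):
    print(s, round(lr_at_step(s, peak, total), 8))
# Ends at peak * 0.1 = 3e-5, matching "10% of its maximum value".
```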
2309.10818#27
2309.10818#29
2309.10818
[ "2302.13971" ]
2309.10818#29
SlimPajama-DC: Understanding Data Combinations for LLM Training
decay scheduler as described in Sec. 6.4, cap the gradient with a clipping value of 1.0, and use a warmup spanning 2,000 steps.

System and platform. For our 7B model training with a large batch size, we use 232 NVIDIA A100 GPUs (80G). We employ llm-foundry [37] as the training platform. We use FSDP with activation checkpointing enabled to save memory consumption. We also use automatic mixed precision with bf16 in training.

# 6.3 Fast Training with Large Batch-size

Large batch training allows a larger learning rate, leading to faster convergence of large models. Also, utilizing a larger batch size can optimize hardware resource usage to make training procedures more efficient. Additionally, fewer batches are required, which further accelerates the training process. As shown in Table 9, our large batch training scheme achieves much higher throughput and mfu than LLaMA [36] and MPT [38] with fewer total training GPU hours.

Overall, in a convex optimization framework, leveraging a larger portion of the dataset typically leads to enhanced results. However, for most large deep models that involve non-convex optimization, the precise nature of the loss landscape remains elusive, making the scenario more intricate. Many prior works [17, 19] have noticed that training with larger batches often results in overfitting compared to using smaller batch sizes for the same network. When utilizing large batch training, there is a propensity for the model to become stuck at or even gravitate towards potential saddle points within the loss landscape. While large batch training methods often focus on the nearest relative minima they encounter, networks trained with smaller batches usually navigate the loss landscape more thoroughly before committing to an optimal minimum. The minima reached through large batch training can be distinctly different from those achieved with smaller batch training methods. In the following, we introduce an approach to mitigate overfitting when training large language models in a large batch-size scheme.

model          batch size  # GPUs (A100-80G)  throughput  mfu     GPU-hours
LLaMA-7B       4M          -                  -           -       82,432
MPT-7B         4M          232                3,310       0.4575  84,351
LBS-7B (ours)  14M         232                3,626       0.5011  76,999
2309.10818#28
2309.10818#30
2309.10818
[ "2302.13971" ]
2309.10818#30
SlimPajama-DC: Understanding Data Combinations for LLM Training
Table 9: Training speed in terms of throughput (tokens per sec on each GPU), model FLOPs utilization (mfu) [5], and total GPU-hours (per trillion training tokens).

# 6.4 Progressive Training on Weight Decay

Prior work [24] observed that when the dropout operation is utilized only in the early stages of training and deactivated in subsequent phases, models that incorporate this early dropout strategy tend to exhibit reduced final training loss compared to models that do not use dropout. In contrast to this, our approach
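As a rough cross-check of the Table 9 numbers, model FLOPs utilization can be approximated with the common 6·N FLOPs-per-token rule from the PaLM paper [5]; the sketch below uses that approximation and the A100 bf16 peak, so the result is only close to, not identical to, the reported 0.5011 (the approximation ignores attention FLOPs).

```python
def approx_mfu(tokens_per_sec_per_gpu, n_params, peak_flops_per_gpu=312e12):
    """Rough MFU estimate: achieved model FLOPs / peak hardware FLOPs.

    Uses the ~6 * N FLOPs-per-token approximation (forward + backward),
    which slightly underestimates MFU since attention FLOPs are ignored.
    peak_flops_per_gpu defaults to A100-80G bf16 dense peak (312 TFLOPs).
    """
    achieved = 6 * n_params * tokens_per_sec_per_gpu
    return achieved / peak_flops_per_gpu

# LBS-7B row of Table 9: 3,626 tokens/sec/GPU with 6.7B parameters.
print(round(approx_mfu(3626, 6.7e9), 3))  # ~0.47, in the ballpark of 0.5011
```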
2309.10818#29
2309.10818#31
2309.10818
[ "2302.13971" ]
2309.10818#31
SlimPajama-DC: Understanding Data Combinations for LLM Training
Figure 4: Loss curve of our LBS-7B training (loss vs. training tokens, 200G–800G).

emphasizes the role of weight decay during large model training. We introduce a novel training strategy for large language models, wherein the training process is segmented into various stages. Within each stage, a distinct weight decay is applied to the model to serve specific objectives. We've termed this approach Progressive Training on Weight Decay (PTWD).
2309.10818#30
2309.10818#32
2309.10818
[ "2302.13971" ]
2309.10818#32
SlimPajama-DC: Understanding Data Combinations for LLM Training
Owing to this methodology, our model, even when trained with a large batch size and extremely few iterations, achieves smooth convergence. As illustrated in Fig. 4, our training strategy consists of three distinct phases. Initially, we negate weight decay by setting it to zero and allow the model to train until full convergence is achieved. It usually reaches a lower loss level within this stage compared to using weight decay, even if it slightly overfits. Following this, in the second phase, we introduce a substantial weight decay, with a value of 0.5 in our experiments, to suppress the overfitting. Once the loss values stabilize, we transition to the third phase, wherein a standard weight decay of 0.1 is implemented, a value consistent with many other LLM training setups. Intriguingly, each phase spontaneously converges within roughly 1/3 of the total training budget, ensuring effective allocation of the training budget throughout the process.

# 6.5 Results of Pre-training and Instruction Tuning

The results from our pretraining and subsequent instruction tuning on the ShareGPT dataset are presented in Table 10. Notably, after instruction tuning, there is a significant enhancement in the MMLU and TruthfulQA metrics. In contrast, the performance on ARC and HellaSwag shows a slight decrease. On the whole, the average accuracy witnessed a substantial boost following instruction tuning. More evaluation results on the pretrained LBS model are provided in Table 6.
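A minimal sketch of the three-phase PTWD schedule described above (weight decay 0 → 0.5 → 0.1). In practice the paper lets each phase run until the loss converges or stabilizes; the sketch simplifies this to fixed one-third fractions of the total step budget, which is an assumption rather than the exact criterion used.

```python
def ptwd_weight_decay(step, total_steps,
                      schedule=((1 / 3, 0.0), (2 / 3, 0.5), (1.0, 0.1))):
    """Progressive Training on Weight Decay (PTWD), simplified.

    Phase 1: weight decay 0.0 until convergence (here: first third of steps).
    Phase 2: weight decay 0.5 to suppress the overfitting from phase 1.
    Phase 3: standard weight decay 0.1 for the remainder of training.
    """
    frac = step / max(1, total_steps)
    for boundary, wd in schedule:
        if frac <= boundary:
            return wd
    return schedule[-1][1]

# Example: phase boundaries at ~1/3 and ~2/3 of a 150k-step run.
for s in (10_000, 60_000, 110_000, 150_000):
    print(s, ptwd_weight_decay(s, 150_000))
```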
2309.10818#31
2309.10818#33
2309.10818
[ "2302.13971" ]
2309.10818#33
SlimPajama-DC: Understanding Data Combinations for LLM Training
Model                  Average  ARC   HellaSwag  MMLU  TruthfulQA
Ours-LBS-7B-Base       44.1     44.3  69.8       26.1  36.1
Ours-LBS-7B-Instruct   46.4     43.5  68.0       32.1  42.1

Table 10: Results of our large batch-size (LBS) trained 7B models following the Huggingface Leaderboard Evaluation [12] using Harness [14].

# 7 Related Work

# 7.1 RedPajama, SlimPajama and Others

RedPajama [7] aims to develop open-source large language models and begins by replicating the LLaMA training dataset [36], which boasts over 1.2 trillion tokens. This collaborative effort involves entities such as Together, Ontocord.ai, ETH DS3Lab, Stanford CRFM, Hazy Research, and the MILA Québec AI Institute. SlimPajama [33] stands as a highly deduplicated, multi-source, open-source dataset tailored for training large language models. This dataset emerged by refining and eliminating duplicates from the whole 1.2T-token RedPajama dataset. Through meticulous filtering of subpar data and repetitive content, it reduced the dataset size by 49.6%, scaling it down from 1.2T to 627B tokens. SlimPajama provides superior quality and computational efficiency for training tasks compared to the original RedPajama dataset. Other efforts have also been made in this direction to construct diverse datasets, such as the Pile [13]. It is an English text corpus of 825 GiB, designed for training large-scale language models with increased training dataset diversity to improve general cross-domain knowledge and downstream generalization capability. It contains a combination of 22 distinct, high-quality subsets. These subsets incorporate both pre-existing and freshly curated data, with a significant portion sourced from scholarly or professional domains.

# 7.2 Data Processing and Optimization Approaches

There have been several advancements in data processing and optimization. The seminal method of importance sampling [20] stands out as a Monte Carlo approach designed to evaluate attributes of a particular distribution, even when the samples are drawn from a distribution that differs from the one under exploration.
2309.10818#32
2309.10818#34
2309.10818
[ "2302.13971" ]
2309.10818#34
SlimPajama-DC: Understanding Data Combinations for LLM Training
SlimPajama's deduplication mechanism is an adaptation of importance sampling, incorporating a heuristic that values unique data points. Recently, several data selection frameworks [18, 15, 34, 40] have been introduced, inspired by the concept of importance sampling. Among them, DSIR [40] presents a framework for the data selection challenge by aiming to choose a subset from a large, unlabeled raw dataset that aligns with a specific target distribution, given a set of unlabeled target examples. It builds upon the traditional importance resampling method, adapting it for data selection in large-scale models. DSIR operates as a scalable algorithm, determining importance weights within a reduced feature space and then selecting data based on these
2309.10818#33
2309.10818#35
2309.10818
[ "2302.13971" ]
2309.10818#35
SlimPajama-DC: Understanding Data Combinations for LLM Training
importance resampling weights. In [34], the authors delve into the relationship between error scaling and dataset size. Their theoretical exploration suggests that by using a robust data pruning metric, which prioritizes which training examples to remove, the proposed method can suppress traditional power-law scaling, potentially reaching exponential scaling for pruned dataset sizes.

# 7.3 Data Combination for Training Large Language Models

The training of large language models, such as GPT [29, 30, 4] and BERT [10], requires significant amounts of data to capture and generalize over the vast intricacies of human language. As a result, researchers often combine data from various sources, such as web text, Github, Books, ArXiv, Wikipedia, etc. Several related approaches and difficulties have been explored in the context of data combination for training large language models. (1) Concatenation of diverse datasets: One of the simplest methods for combining data is to concatenate various corpora, covering diverse topics, styles, and sources. This ensures that the model gets a broad view of the language. (2) WebText and similar corpora: For OpenAI's GPT-2, a dataset called WebText [30] was curated by scraping content from the internet. This kind of data provides a rich mix of formal, informal, factual, and opinionated text, thus offering diverse training material. (3) Balancing and weighting: Simply combining data may lead to issues if one source is overrepresented. Prior studies have applied weights to different data portions or ensured that the combined dataset is balanced in terms of sources, styles, and other criteria. For instance, DoReMi [39] first trains a small proxy model using group distributionally robust optimization across domains, generating domain weights (or mixture proportions) without relying on information from subsequent tasks. Following this, they utilize these domain weights to resample a dataset, on which they then train a full-size model. (4) Multimodal training: Combining text with other data forms, like images or sounds, can also enhance language model training, especially for tasks that require understanding across modalities.

# 7.4 Large Batch Training for Large Language Models

Large language models inherently possess a structure that supports parallelization, especially when optimized using techniques that allow for batch training.
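As background for the importance-resampling-based selection methods discussed in Sec. 7.2 (e.g., DSIR [40]), here is a heavily simplified sketch: score each raw example by the ratio of target to raw likelihood in some feature space, then resample in proportion to those weights. The scoring functions below are placeholders (DSIR itself uses hashed n-gram features), and sampling is done with replacement for simplicity.

```python
import math
import random

def importance_resample(raw_examples, target_logprob, raw_logprob, k, rng=random):
    """Select k examples from raw data, weighted toward a target distribution.

    target_logprob / raw_logprob are caller-supplied functions that score an
    example under the target and raw feature distributions (placeholders here).
    """
    weights = [math.exp(target_logprob(x) - raw_logprob(x)) for x in raw_examples]
    return rng.choices(raw_examples, weights=weights, k=k)

# Toy usage: favor longer documents as a stand-in "target" distribution.
docs = ["a b", "a b c d", "a b c d e f g h"]
picked = importance_resample(docs, lambda d: len(d.split()), lambda d: 0.0, k=2)
print(picked)
```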
2309.10818#34
2309.10818#36
2309.10818
[ "2302.13971" ]
2309.10818#36
SlimPajama-DC: Understanding Data Combinations for LLM Training
When computational resources permit, large batch sizes are favored to expedite the training of large models containing potentially millions or billions of parameters. At a fundamental level, larger batch sizes enhance the quality of each gradient update since they consider a larger chunk of the dataset. Conversely, a smaller batch size means that model parameter updates are based on gradients derived from a limited portion of the dataset. This smaller dataset slice might not comprehensively capture the intricate relationships between features and labels. Therefore, it might seem that larger batch sizes consistently offer advantages in training. However, [19] pointed out that this perspective does not factor in the model's capacity to generalize to new, unseen
2309.10818#35
2309.10818#37
2309.10818
[ "2302.13971" ]
2309.10818#37
SlimPajama-DC: Understanding Data Combinations for LLM Training
data, nor the intricate, non-convex optimization landscape of contemporary large models. In practice, multiple studies [17, 19] have demonstrated that while larger batch sizes might hasten convergence, they can impair a model's generalization to new datasets, irrespective of the deep network type. This observed disparity has been named the Generalization Gap. A method [17] to address this gap involves starting from a smaller batch size and gradually enlarging it as training advances. In our study, we explore this problem through a new and unique angle of progressive weight decay training.
2309.10818#36
2309.10818#38
2309.10818
[ "2302.13971" ]
2309.10818#38
SlimPajama-DC: Understanding Data Combinations for LLM Training
# 8 Conclusion

We have presented SlimPajama-DC, a comprehensive study on understanding the data domain weights and combinations for training large language models. Notably, SlimPajama-DC can operate on compact models, and its advantages can be seamlessly transferred to models that are several times larger. This leads to a remarkable acceleration in training on SlimPajama with the optimal sampling probabilities across domains for larger models. Through this, we aim to spark further exploration into data-centric methods to enhance the efficiency of large language model training.
2309.10818#37
2309.10818#39
2309.10818
[ "2302.13971" ]
2309.10818#39
SlimPajama-DC: Understanding Data Combinations for LLM Training
# References

[1] Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397–2430. PMLR, 2023. 7

[2] Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. Gpt-neox-20b: An open-source autoregressive language model. arXiv preprint arXiv:2204.06745, 2022. 10

[3] Sid Black, Gao Leo, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo:
2309.10818#38
2309.10818#40
2309.10818
[ "2302.13971" ]
2309.10818#40
SlimPajama-DC: Understanding Data Combinations for LLM Training
Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, Mar. 2021. If you use this software, please cite it using these metadata. 11, 12

[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–
2309.10818#39
2309.10818#41
2309.10818
[ "2302.13971" ]
2309.10818#41
SlimPajama-DC: Understanding Data Combinations for LLM Training
1901, 2020. 7, 9, 18

[5] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. 15
2309.10818#40
2309.10818#42
2309.10818
[ "2302.13971" ]
2309.10818#42
SlimPajama-DC: Understanding Data Combinations for LLM Training
[6] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018. 10

[7] Together Computer. Redpajama: An open source recipe to reproduce llama training dataset, 2023. 1, 3, 7, 11, 12, 17

[8] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022. 14

[9] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International conference on machine learning, pages 933–941. PMLR, 2017. 9

[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
2309.10818#41
2309.10818#43
2309.10818
[ "2302.13971" ]
2309.10818#43
SlimPajama-DC: Understanding Data Combinations for LLM Training
Bert: Pre-training of deep bidirectional transformers for language understanding, 2019. 18

[11] Nolan Dey, Gurpreet Gosal, Hemant Khachane, William Marshall, Ribhu Pathria, Marvin Tom, Joel Hestness, et al. Cerebras-gpt: Open compute-optimal language models trained on the cerebras wafer-scale cluster. arXiv preprint arXiv:2304.03208, 2023. 1, 9, 11
2309.10818#42
2309.10818#44
2309.10818
[ "2302.13971" ]
2309.10818#44
SlimPajama-DC: Understanding Data Combinations for LLM Training
[12] Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Omar Sanseviero, Lewis Tunstall, and Thomas Wolf. Open llm leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, 2023. 10, 11, 17

[13] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. 7, 17

[14] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, Sept. 2021. 10, 11, 17

[15] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith.
2309.10818#43
2309.10818#45
2309.10818
[ "2302.13971" ]
2309.10818#45
SlimPajama-DC: Understanding Data Combinations for LLM Training
Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020. 17

[16] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021. 10

[17] Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. Advances in neural information processing systems, 30, 2017. 15, 19
2309.10818#44
2309.10818#46
2309.10818
[ "2302.13971" ]
2309.10818#46
SlimPajama-DC: Understanding Data Combinations for LLM Training
[18] Angelos Katharopoulos and François Fleuret. Not all samples are created equal: Deep learning with importance sampling. In International conference on machine learning, pages 2525–2534. PMLR, 2018. 17

[19] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016. 15, 18, 19

[20] Teun Kloek and Herman K Van Dijk. Bayesian estimates of equation system parameters: an application of integration by monte carlo. Econometrica: Journal of the Econometric Society, pages 1–
2309.10818#45
2309.10818#47
2309.10818
[ "2302.13971" ]
2309.10818#47
SlimPajama-DC: Understanding Data Combinations for LLM Training
19, 1978. 17

[21] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. The stack: 3 tb of permissively licensed source code. Preprint, 2022. 14

[22] Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman.
2309.10818#46
2309.10818#48
2309.10818
[ "2302.13971" ]
2309.10818#48
SlimPajama-DC: Understanding Data Combinations for LLM Training
Mining of massive data sets. Cambridge university press, 2020. 7

[23] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, 2022. 10

[24] Zhuang Liu, Zhiqiu Xu, Joseph Jin, Zhiqiang Shen, and Trevor Darrell.
2309.10818#47
2309.10818#49
2309.10818
[ "2302.13971" ]
2309.10818#49
SlimPajama-DC: Understanding Data Combinations for LLM Training
Dropout reduces underfitting. In ICML, 2023. 15 [25] Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Dan S Weld. S2orc: The semantic scholar open research corpus. arXiv preprint arXiv:1911.02782, 2019. 14 [26] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 10 [27] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. 5, 7, 8 [28] Ofir Press, Noah A Smith, and Mike Lewis.
2309.10818#48
2309.10818#50
2309.10818
[ "2302.13971" ]
2309.10818#50
SlimPajama-DC: Understanding Data Combinations for LLM Training
Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409, 2021. 1, 9

[29] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. 18

[30] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. 18

[31] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021. 7

[32] Noam Shazeer. Glu variants improve transformer. arXiv preprint arXiv:2002.05202, 2020. 1, 9

[33] Daria Soboleva, Faisal Al-Khateeb, Robert Myers, Jacob R Steeves, Joel Hestness, and Nolan Dey. SlimPajama: A 627B token cleaned and deduplicated version of RedPajama. https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama, 2023. 1, 17

[34] Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. Advances in Neural Information Processing Systems, 35:19523–19536, 2022. 17, 18

[35] Philippe Tillet, Hsiang-Tsung Kung, and David Cox. Triton: an intermediate language and compiler for tiled neural network computations. In Proceedings of the 3rd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, pages 10–
2309.10818#49
2309.10818#51
2309.10818
[ "2302.13971" ]
2309.10818#51
SlimPajama-DC: Understanding Data Combinations for LLM Training
19, 2019. 14

[36] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 3, 7, 15, 17

[37] https://github.com/mosaicml/llm-foundry. Llm foundry. Mosaicml, 2023. 15

[38] Introducing mpt-7b: A new standard for open-source, commercially usable llms. Mosaicml blog, 2023. 3, 14, 15

[39] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V Le, Tengyu Ma, and Adams Wei Yu.
2309.10818#50
2309.10818#52
2309.10818
[ "2302.13971" ]
2309.10818#52
SlimPajama-DC: Understanding Data Combinations for LLM Training
Doremi: Optimizing data mixtures speeds up language model pretraining. arXiv preprint arXiv:2305.10429, 2023. 18 [40] Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy Liang. Data selection for language models via importance resampling. arXiv preprint arXiv:2302.03169, 2023. 17 [41] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi.
2309.10818#51
2309.10818#53
2309.10818
[ "2302.13971" ]
2309.10818#53
SlimPajama-DC: Understanding Data Combinations for LLM Training
Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. 10

[42] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022. 7

# Appendix

# A Data Proportion Details
2309.10818#52
2309.10818#54
2309.10818
[ "2302.13971" ]
2309.10818#54
SlimPajama-DC: Understanding Data Combinations for LLM Training
Dataset         Redpajama        Slimpajama       LLaMA 1
Commoncrawl     72.6% (878B)     52.2% (333B)     67.0% (670/938B)
C4              14.4% (175B)     26.7% (170B)     15.0% (150/210B)
GitHub          4.9% (59B)       5.2% (33B)       4.5% (45/63B)
Books           2.1% (26B)       4.2% (27B)       4.5% (45/63B)
ArXiv           2.3% (28B)       4.6% (29B)       2.5% (25/35B)
Wikipedia       2.0% (24B)       3.8% (24B)       4.5% (45/63B)
StackExchange   1.7% (20B)       3.3% (21B)       2.0% (20/28B)
Total           100.0% (1.2T)    100.0% (637B)    100% (1.0/1.4T)

Dataset         GPT3             RefinedWeb       MassiveText
Commoncrawl     60.0% (180B)     100% (600B)      0.0% (0B)
C4              0.0% (0B)        0.0% (0B)        10.0% (30B)
GitHub          0.0% (0B)        0.0% (0B)        3.0% (9B)
Books           16.0% (48B)      0.0% (0B)        27.0% (81B)
Wikipedia       3.0% (9B)        0.0% (0B)        2.0% (6B)
WebText2        22.0% (66B)      0.0% (0B)        0.0% (0B)
MassiveWeb      0.0% (0B)        0.0% (0B)        48.0% (144B)
News            0.0% (0B)        0.0% (0B)        10.0% (30B)
Total           100.0% (300B)    100% (600B)      100.0% (300B)

Table 11: Detailed data source proportions for various datasets.

# B MMLU
2309.10818#53
2309.10818#55
2309.10818
[ "2302.13971" ]
2309.10818#55
SlimPajama-DC: Understanding Data Combinations for LLM Training
In this section, we provide the detailed item-by-item results in MMLU, as shown in Table 12, it is interesting to notice that on some sub-domains in MMLU, the results from our configured 1.3B models are even better than GPT- 3 175B and LLaMA2 7B models. 23 GPT-3 Llama2 175B 7B SlimPajama-DC 1.3B DC-1 DC-2 DC-3 DC-4 DC-5 DC-6 Abstract Algebra Anatomy Astronomy Business Ethics Clinical Knowledge College Biology College Chemistry College Computer Science College Mathematics College Medicine College Physics Computer Security Conceptual Physics Econometrics Electrical Engineering Elementary Mathematics Formal Logic Global Facts High School Biology High School Chemistry High School Computer Science Humanities High School European History High School Geography Social Science High School Government And Politics Social Science High School Macroeconomics Social Science High School Mathematics High School Microeconomics High School Physics High School Psychology High School Statistics High School Us History High School World History Human Aging Human Sexuality International Law Jurisprudence Logical Fallacies Machine Learning Management Marketing Medical Genetics Miscellaneous Moral Disputes Moral Scenarios Nutrition Philosophy Prehistory Professional Accounting Professional Law Professional Medicine Professional Psychology Public Relations Security Studies Sociology Us Foreign Policy Virology World Religions STEM 30.0 STEM 48.0 STEM 49.0 46.0 Other 48.0 Other STEM 45.0 STEM 26.0 STEM 46.0 STEM 34.5 48.0 Other STEM 28.0 STEM 57.0 STEM 36.5 33.0 STEM 50.0 STEM 30.0 29.0 Humanities 37.0 Other STEM 48.0 STEM 33.0 STEM 39.0 54.0 58.0 58.0 40.5 STEM 28.0 42.0 STEM 28.0 61.0 STEM 30.5 53.0 56.0 50.0 54.0 55.5 55.0 48.0 STEM 31.0 56.0 Other 60.0 Other 40.0 Other 60.0 Other 44.5 Humanities 26.0 Humanities 47.0 Other 51.0 Humanities 53.0 Humanities 33.0 Other 34.5 Humanities 36.0 Other 44.5 Social Science 48.0 Social Science 52.0 Social Science 53.0 Social Science 69.0 Social Science 46.0 Other 55.0 Humanities Social Science Social Science Social Science Humanities Humanities Other Social Science Humanities Humanities Humanities 29.0 37.0 33.6 40.0 35.1 37.5 32.0 29.0 33.0 30.6 26.5 45.0 36.6 23.7 26.9 24.3 27.0 29.0 34.5 28.1 31.0 44.2 34.3 44.6 35.4 24.8 31.9 26.5 47.3 35.2 39.7 40.9 40.8 36.6 51.2 38.9 39.3 23.2 35.0 46.6 43.0 42.4 40.2 24.3 37.6 39.9 36.1 25.9 30.2 44.5 35.1 40.9 31.8 46.8 46.0 30.1 50.9 27.0 23.0 25.0 24.0 30.2 23.6 26.0 37.0 35.0 26.0 24.5 24.0 27.7 24.6 29.0 26.2 35.7 30.0 25.8 27.6 29.0 23.6 34.3 35.2 34.4 26.7 23.5 27.8 32.3 21.3 24.5 29.1 14.8 28.2 26.5 26.9 19.6 17.9 26.2 22.2 27.0 22.5 29.5 27.3 28.1 28.0 26.5 27.0 27.1 19.9 26.3 33.6 39.2 25.4 27.0 21.7 27.5 26.0 23.0 19.7 22.0 26.8 24.3 19.0 36.0 29.0 23.1 24.5 30.0 30.2 25.4 24.1 25.9 24.6 31.0 26.5 19.7 26.0 28.5 20.7 16.6 25.9 25.2 23.1 26.5 23.1 21.3 21.6 25.7 30.5 22.1 30.6 22.2 27.0 33.0 29.1 24.4 24.0 27.5 25.7 24.6 23.2 28.9 25.9 29.1 25.0 31.6 27.3 30.9 17.5 24.4 31.0 30.1 25.2 28.0 25.9 21.7 30.0 25.7 23.6 21.0 33.0 21.0 26.6 24.5 28.0 23.8 24.6 23.5 25.9 15.9 33.0 24.8 24.1 25.0 25.5 22.2 21.8 23.8 25.2 25.2 21.9 23.8 19.9 24.5 24.5 37.2 22.9 39.7 26.9 29.5 23.2 27.2 23.9 24.0 27.6 24.9 24.3 25.2 26.7 29.3 27.0 25.8 22.8 25.5 28.2 18.8 22.9 24.0 31.3 32.8 25.0 27.4 23.0 26.0 24.9 27.1 29.0 32.0 31.0 26.0 21.6 19.0 22.1 30.7 26.2 27.5 20.6 30.0 25.5 27.1 26.0 24.9 19.2 25.9 22.8 28.5 25.2 27.2 22.9 22.2 24.5 27.4 30.5 22.1 32.2 27.8 23.9 28.6 21.4 25.2 22.0 29.3 24.9 23.8 25.8 29.3 26.9 27.3 24.6 21.0 25.2 29.1 21.2 
24.9 25.0 27.1 29.8 27.0 34.1 27.0 24.0 18.9 25.7 19.0 36.0 25.0 27.8 22.6 27.0 28.5 23.7 29.0 25.1 16.7 37.0 24.8 27.1 27.0 26.7 17.7 21.8 24.6 26.7 21.4 29.8 23.7 23.2 27.5 25.7 27.4 25.2 30.6 25.0 27.6 30.4 23.3 28.2 23.0 26.2 24.0 24.6 25.8 28.3 27.5 27.0 26.9 27.9 27.5 26.4 16.3 23.9 28.0 28.3 32.2 21.0 19.3 20.4 28.0 26.0 31.9 25.0 33.0 36.0 24.9 21.6 27.0 24.3 29.8 28.3 23.5 29.4 17.0 21.9 25.1 27.0 20.6 18.2 21.8 32.8 23.7 23.5 26.5 22.2 40.7 27.9 24.9 35.9 32.1 24.0 25.9 30.7 30.4 25.2 26.9 31.0 24.1 26.6 23.9 21.6 26.1 26.5 23.4 25.4 21.7 26.8 27.3 21.2 24.4 33.0 30.7 26.3
2309.10818#54
2309.10818#56
2309.10818
[ "2302.13971" ]
2309.10818#56
SlimPajama-DC: Understanding Data Combinations for LLM Training
# Humanities STEM Social Science Other 40.6 36.7 50.5 49.0 34.0 30.5 38.3 38.1 27.1 26.5 30.3 24.6 25.8 25.8 24.0 27.1 26.9 24.4 23.6 27.8 26.2 26.1 24.5 25.9 26.4 27.1 23.3 26.5 # All # All 43.9 35.1 27.0 25.7 25.6 25.7 26.0 24 Table 12: MMLU. 5-shot results per domain on the test sets. 26.0 26.7 26.1 25.9 26.2
2309.10818#55
2309.10818
[ "2302.13971" ]
2309.10691#0
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
arXiv:2309.10691v2 [cs.CL] 12 Oct 2023

Preprint.

# MINT: EVALUATING LLMS IN MULTI-TURN INTERACTION WITH TOOLS AND LANGUAGE FEEDBACK

Xingyao Wang1*, Zihan Wang1,2*†, Jiateng Liu1, Yangyi Chen1, Lifan Yuan1†, Hao Peng1, Heng Ji1
1 University of Illinois Urbana-Champaign, 2 Renmin University of China
1{xingyao6,zihanw,jiateng5,yangyic3,haopeng,hengji}@illinois.edu

# ABSTRACT

To solve complex tasks, large language models (LLMs) often require multiple rounds of interactions with the user, sometimes assisted by external tools. However, current evaluation protocols often emphasize benchmark performance with single-turn exchanges, neglecting the nuanced interactions among the user, LLMs, and external tools, while also underestimating the importance of natural language feedback from users. These oversights contribute to discrepancies between research benchmark evaluations and real-world use cases. We introduce MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn interactions by (1) using tools and (2) leveraging natural language feedback. To ensure reproducibility, we provide an evaluation framework where LLMs can access tools by executing Python code and receive users' natural language feedback simulated by GPT-4. We repurpose a diverse set of established evaluation datasets focusing on reasoning, coding, and decision-making and carefully curate them into a compact subset for efficient evaluation. Our analysis of 20 open- and closed-source LLMs offers intriguing findings. (a) LLMs generally benefit from tools and language feedback, with performance gains (absolute, same below) of 1–8% for each turn of tool use and 2–17% with natural language feedback. (b) Better single-turn performance does not guarantee better multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised instruction-finetuning (SIFT) and reinforcement learning from human feedback (RLHF) generally hurt multi-turn capabilities.
2309.10691#1
2309.10691
[ "2308.12950" ]
2309.10691#1
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
We expect MINT can help measure progress and incentivize research in improving LLMs' capabilities in multi-turn interactions, especially for open-source communities where multi-turn human evaluation can be less accessible compared to commercial LLMs with a larger user base.

# 1 INTRODUCTION

To address complex tasks, a Large Language Model (LLM) often needs multiple rounds of interaction with the user, sometimes aided by external tools (Schick et al., 2023b; ChatGPT Plugins; Mialon et al., 2023). LLMs' performance during multiple turns of user-LLM exchanges is crucial in real-world applications: roughly 73% of Human-ChatGPT conversations contain more than one turn based on 94k entries of ShareGPT data (2023)2. Meanwhile, the ability to adapt to user-provided natural language feedback is also pivotal for their practical utility. However, current LLM evaluations predominantly focus on single-turn input-output (Hendrycks et al., 2020; Chen et al., 2021) and often overlook user-provided natural language feedback (Liu et al., 2023d; Deng et al., 2023b; Yang et al., 2023a; Shridhar et al., 2020), creating a discrepancy between real-world use cases and evaluation. Measuring how much LLMs can benefit from both tools and natural language feedback during multi-turn interaction is essential to incentivize future research to improve LLMs' capabilities in real-world scenarios.

*Equal contribution. †Work done during internship at UIUC.
1Code is available on our project website: https://xingyaoww.github.io/mint-bench
2https://sharegpt.com/

Preprint.

To bridge these gaps, we introduce MINT. It is a benchmark for LLMs that measures their performance during multi-turn interaction, focusing on two particular capabilities (§2.1): (1) tool-augmented task-solving; (2) leveraging natural language feedback. MINT mirrors the real-world User-LLM-Tool collaborative problem-solving setting.
2309.10691#0
2309.10691#2
2309.10691
[ "2308.12950" ]
2309.10691#2
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
To solve a problem, the LLM can use external tools by generating and executing Python programs and/or collecting natural language feedback to refine its solutions; the feedback is provided by GPT-4 (OpenAI, 2023), aiming to simulate human users in a reproducible and scalable way.3 For a comprehensive evaluation, we include eight established datasets spanning reasoning, code generation, and decision-making (§2.2). To facilitate affordable multi-turn evaluation, after collecting 29,307 diverse instances from existing datasets (Tab. 1), we filter and sub-sample a compact dataset of 586 challenging and representative instances that require multi-turn interaction to solve.4

We evaluate 4 closed- and 16 open-source LLMs with MINT. We measure LLMs' tool-augmented task-solving capability by analyzing their performance from multi-turn tool use (§3.2). To assess the ability to leverage natural language feedback, we measure their performance upon natural language feedback by GPT-4 (§3.3). Our results show that:

• All models benefit from tool interaction and natural language feedback, with absolute performance gains of 1–8% for each additional turn of tool use, and 2–17% with natural language feedback.
• Better single-turn performance does not necessarily lead to better multi-turn performance. For example, while Claude-2 outperforms its predecessor Claude-1 in single-turn evaluation, the latter benefits more from interaction and performs better with > 2 turns.
• There is a notable gap between open- and closed-source LLMs in multi-turn interaction performance. For example, with the help of language feedback, even the best open-source model, Lemur-70b-chat-v1, lags behind the best closed-source model by 8.7% in absolute success rate.
• On most LLMs we evaluated, models trained with supervised instruction fine-tuning (SIFT, Wei et al., 2022) and reinforcement learning from human feedback (RLHF, Ouyang et al., 2022a) perform worse in multi-turn settings regardless of the presence of language feedback. For example, SIFT hurts Codellama-34B's
2309.10691#1
2309.10691#3
2309.10691
[ "2308.12950" ]
2309.10691#3
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
multi-turn performance by 11.1% and 15.4% (w/ feedback), and RLHF negatively affects LLaMA-2-70B by 8.5% and 8.7%, respectively. Notable exceptions are Vicuna-7B and Lemur-70b-chat-v1, where SIFT improves multi-turn interaction.

By fixing the LLM to evaluate and changing the feedback-provider LLM, MINT can measure different LLMs' capabilities in providing useful feedback (§3.4); we find that feedback-providing ability could be orthogonal to task-solving ability: despite performing the worst in task-solving, CodeLLaMA-34B-Instruct can provide feedback to improve the stronger GPT-3.5. Additionally, MINT's challenging evaluation reveals undesired artifacts in ShareGPT data (2023), a widely used dataset for instruction tuning (§3.5). Furthermore, we show that GPT4-simulated language feedback is as helpful as human-written feedback based on human evaluation and task performance (§3.6). We expect that MINT can help track progress and incentivize future research in improving LLMs' multi-turn task-solving and/or feedback-providing capabilities, especially for open-source communities where human evaluation can be less accessible than commercial LLMs with a large user base.

# 2 MINT

In this section, we discuss (1) how to evaluate multi-turn interaction (§2.1) with tool use and language feedback under different settings; (2) how we repurpose existing datasets for MINT evaluation (§2.2). We use Fig. 1 as a running example.

2.1 INTERACTION FRAMEWORK

MINT aims to simulate real-world applications of LLMs, emphasizing user-LLM and LLM-tool interaction. In a user-LLM collaborative problem-solving process, a human user provides initial instruction and aims to obtain a satisfactory solution with little effort in helping the LLM. On the
2309.10691#2
2309.10691#4
2309.10691
[ "2308.12950" ]
2309.10691#4
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
On the 3We use gpt-4-0613 version in this work. 4Evaluating an LLM using MINT costs around 100 USD (â 3M prompt tokens and â 100K completion tokens) with feedback from gpt-4-0613 ($0.03/1K prompt tokens and $0.06/1K completion tokens), roughly 7% of the cost compared to hiring real-human annotators (§3.6). 2 # Preprint. ° , © User starts interaction User (optional, LLM-simulated) Interaction Turn #1 !
2309.10691#3
2309.10691#5
2309.10691
[ "2308.12950" ]
2309.10691#5
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
This is . [..] Now you can use this information to find out how! many insects Lily found. Q user Task: Together Lily, David, and Bodhi collected 43 insects. Lily found 7 more than David. David found half of what Bodhi found. How many insects did Lily find? Solution output format: an integer. Thought: Now that we know that Bodhi found 18 insects [...] Lily found 7 more than David, so Lily found 18 + 7 insects. LLM Execute: lily_insects = solution[0] + 7 Lily_insects Thought: To solve this problem, let's start by setting up equations [...] assume that David found x insects. Based on the given information, Interaction Turn #4 Lily found 7 more than David, so Lily found x + 7 insects [...] Execute: from sympy import symbols, Eq, solve & Python Interpreter x, b = symbols('x b') Observation: Out[3}: 25.0000000000000 # set up the equation You have 2 steps left and 2 chances to propose solution left. equation = Eq(x + (x + 7) + (1/2) * b, 43) # solve the equation solution = solve(equation, x) !
2309.10691#4
2309.10691#6
2309.10691
[ "2308.12950" ]
2309.10691#6
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
This is NOT GOOD. You made a mistake in calculating the number of solution[0] : insects Lily found. You should have calculated the number of insects David found first, which is half of what Bodhi found User (optional, LLM-simulated) Interaction Turn #2 i & Python Interpreter _â , Thought: [Apologize ...] Bodhi found 18 insects [...] David found half of Grrcerceiaie Ci p Te) â Oz what Bodhi found, so David found (1/2) * 18 insects. Lily found 7 more You have 4 steps left and 2 chances to propose solution left. than David, so Lily found (1/2) "18 +7 insects 2. User (optional, LLM-simulated) Execute: david_insects = (1/2) * solution[o] lily_insects = david_insects + 7 Lily_insects Interaction Turn #5 Thought: [Apologize ...] Bodhi found b insects. David found half of what ® Python Interpreter Bodhi found, so David found (1/2) * b insects [...] Observation: Out[4]: 16.0000000000000 Execute: from sympy import symbols, Eq, solve You have 1 steps left and 2 chances to propose solution left. b = symbols('b') You should take the last step to propose a solution. # set up the equation equation = Eq((1/2) * b + 7 + (1/2) * b + b, 43) User (optional, LLM-simulated) # solve the equation â
2309.10691#5
2309.10691#7
2309.10691
[ "2308.12950" ]
2309.10691#7
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
This is GOOD. You have correctly calculated the number of insects Lily? solution = solve(equation, b) i Now you can propose the solution. solution[o] Interaction Turn #3 Propose Solution: The number of insects Lily found is 16. @® python interpreter ° : . Q User Observation: Out[2]: 18.0000000000000 You have 3 steps left and 2 chances to propose solution left. Good job! You have successfully solved the task! [stops interaction] Figure 1: An interaction trajectory produced by evaluating gpt-3.5-turbo-0613 with MINT on a mathematical reasoning task. The evaluated modelâ s outputs are in the blue boxes, and the feedback by gpt-4-0613 in red, dotted ones. Some details are omitted for clarity. other hand, augmenting LLMsâ with tools can effectively improve LLMsâ task-solving capabilities (Mialon et al., 2023), suggesting the importance of LLM-Tool interaction. We instruct the LLM (§F.4.1) to perform the following steps in each turn: (1) optionally express its reasoning process (â Thought:â in Fig. 1, similar to Yao et al. (2022)); (2) then either interact with tools by generating Python code and executing it through a Python interpreter (â Execute:â in Fig. 1), or proposing a solution to the user (â Propose Solution:â in Fig. 1). In our implementation, the model is instructed to wrap their â Executeâ and â Propose Solutionâ actions with pairs of <execute> and <solution> tags for ease of parsing. We standardize the prompts and in-context examples for different LLM variants (base vs. chat) and for task-solving and feedback providing, aiming for fair and reproducible comparisons (Appendix §F.4.1, §F.4.2, and §F.5). In what follows, we introduce three settings with increased interaction complexity to measure different aspects of multi-turn interaction. Lazy User-LLM Interaction. We consider the scenario where a user provides an initial instruction and makes minimal efforts to guide the LLM towards the final solution. This will serve as a baseline for subsequent evaluations of LLMâ s ability in tool-augmented task-solving and leveraging natural language feedback.
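Since the model is instructed to wrap its actions in <execute> and <solution> tags, a small parser is enough to route each turn either to the Python interpreter or to answer checking. The sketch below is illustrative only and is not MINT's actual implementation (which is linked on the project website); it assumes at most one tagged block per turn, as the prompt instructs.

```python
import re

def parse_action(llm_output: str):
    """Return ("execute", code) or ("solution", text) from one LLM turn."""
    for tag in ("execute", "solution"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", llm_output, re.DOTALL)
        if match:
            return tag, match.group(1).strip()
    return "none", ""

turn = "Thought: solve with sympy.\n<execute>\nprint(18 + 7)\n</execute>"
print(parse_action(turn))  # ('execute', 'print(18 + 7)')
```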
2309.10691#6
2309.10691#8
2309.10691
[ "2308.12950" ]
2309.10691#8
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
The LLM is given two attempts to propose solutions for each problem, with a limit on the number of interaction turns k (§3.1). Upon a proposed solution, MINT simulates users that check the solutionâ s correctness with ground truths. When the first attempt is wrong, the user responds to the LLM with â Your answer is wrong.â The interaction ends either after the LLM has made two attempts to propose a solution, or when the solution is verified as correct (5th turn of Fig. 1), or when the k-th turn of interaction is reached. We consider this as the case of Lazy User- LLM Interaction since the simulated user provides at most one additional binary feedback during the entire course of interaction. We follow standard evaluation practice and use established evaluation metrics for each task in §2.2. LLM-Tool Interaction with Lazy User-LLM Interaction. Under the lazy User-LLM interaction setting, we measure the LLMâ s ability to solve tasks using tools by comparing their task-solving success rate across different interaction limits k. For each turn, the LLM can choose to interact with
2309.10691#7
2309.10691#9
2309.10691
[ "2308.12950" ]
2309.10691#9
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
Table 1: Dataset statistics of re-purposed data instances from existing datasets into MINT. We filter and down-sample existing datasets to construct a compact set of complex tasks that require multi-turn interaction to solve (§2.2).

Task Type        Task Name                          Original Size   Reduced Size in MINT
Code Generation  HumanEval (Chen et al., 2021)      164             45
Code Generation  MBPP (Austin et al., 2021)         500             91
Decision Making  ALFWorld (Shridhar et al., 2020)   134             134
Reasoning        GSM8K (Cobbe et al., 2021)         1319            48
Reasoning        HotpotQA (Yang et al., 2018)       7,405           43
Reasoning        MATH (Hendrycks et al., 2021)      5,000           100
Reasoning        MMLU (Hendrycks et al., 2020)      13,985          76
Reasoning        TheoremQA (Chen et al., 2023)      800             49
Total                                               29,307          586

tools (generate code to call an equation-solver in Fig. 1) or propose a solution (5th turn in Fig. 1). To keep the LLM from getting stuck in an infinite loop of tool-calling without proposing a solution, MINT reminds the LLM: "You have X steps left and Y chances to propose solution left," and provides an additional instruction at the last turn: "You should take the last step to propose a solution."
2309.10691#8
2309.10691#10
2309.10691
[ "2308.12950" ]
2309.10691#10
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
Intuitively, with more interaction with tools, the LLM can get more useful observations through the Python interpreter (e.g., calculation results, error messages). We vary k ∈ {1, 2, 3, 4, 5} and compare the models' success rate with each k. We consider the LLM's performance gain w.r.t. k and the absolute performance at k = 5 as their tool-augmented task-solving ability (§3.2).

Informative User-LLM Interaction with Language Feedback. Beyond lazy User-LLM interaction, we investigate how the LLM performs when the user mirrors a patient teacher who provides useful suggestions. However, collecting human language feedback for LLM evaluation presents reproducibility challenges due to inconsistent standards and can be costly, particularly for open-source communities with relatively fewer resources5. To address these issues, we prompt GPT-4 (§F.4.2) to simulate user language feedback (dotted boxes in Fig. 1). We validate the effectiveness of GPT-4 feedback in a human evaluation (§3.6). We compare the performance between (1) simulated language feedback and (2) lazy user-LLM interaction, both in the setting of tool-augmented interaction with an interaction limit k = 5. We consider performance (absolute) and improvements from language feedback as the LLM's ability to leverage natural language feedback.
2309.10691#9
2309.10691#11
2309.10691
[ "2308.12950" ]
2309.10691#11
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
2.2 REPURPOSING EXISTING DATASETS FOR MINT

Evaluating LLMs in multi-turn interaction can be costly due to the need for iterative inference. For instance, HotpotQA (Yang et al., 2018) has 7,405 test examples. Evaluation with five turns requires at least 7,405 × 5 = 37K LLM inference runs. Previous methods (Yao et al., 2022; Shinn et al., 2023) choose to evaluate on randomly drawn test examples, hindering fair performance comparisons. We select diverse tasks from established datasets that require multi-turn interaction to solve while also keeping the selected subset compact for accessible evaluation. The following paragraphs describe our three-step approach to repurposing datasets for MINT. We provide dataset sources and statistics in Tab. 1. For more details, please refer to §D in Appendix.

Collecting and Re-purposing Data from Diverse Sources. Our primary goal is to create a comprehensive evaluation covering tasks that benefit from interaction. We choose three types of tasks:
• Reasoning, including math reasoning (GSM8K, MATH, TheoremQA), multi-hop question answering (HotpotQA), and knowledge problem-solving (MMLU). We implicitly filter out knowledge-intensive questions that do not require multi-step reasoning in the next step.
• Code generation, including HumanEval and MBPP.
• Decision-making tasks in ALFWorld, an embodied household simulator with a text-only interface based on TextWorld (Côté et al., 2018).

5Based on our human evaluation (§3.6, §B), we estimate annotators, on average, take 96 seconds to provide language feedback for one turn, which translates to 90 USD per 100 pieces of feedback at the hourly wage of US workers.

Preprint.
2309.10691#10
2309.10691#12
2309.10691
[ "2308.12950" ]
2309.10691#12
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
Through in-context prompting (§F.5.2), we encourage LLMs to use the Python interpreter to test their generated code on the pro- vided public test suite for code generation problems before committing to a solution. Keeping Instances that Require Multi-turn Interaction. To better answer our research question â how LLM benefits from multi-turn interaction,â we only keep instances that are challenging and require multi-turn interaction. Since we allow LLM to propose solutions more than once, we filter out instances that a random guess baseline can do well, e.g., multiple-choice instances with < 4 options. We then run gpt-3.5-turbo-0613 (OpenAI API) on the initial dataset and exclude instances finished within two turns (e.g., easy problems that can be solved without multi-turn). Stratified Sub-Sampling for Efficient Evaluation. We use stratified sampling to create a compact and representative set of 586 examples, ensuring that the ratio of correct to incorrect examples in the resulting set mirrors that of the original data to balance the difficulty of the resulting samples. # 3 EXPERIMENTS 3.1 SETUP
# 3 EXPERIMENTS

3.1 SETUP

Evaluated LLMs. To comprehensively measure multi-turn interaction capability and identify the potential gap between open- and closed-source LLMs, we evaluate 4 closed-source and 16 open-source LLMs. We cover different sizes and training techniques to better understand how they affect LLMs' multi-turn interaction capability. Training techniques lead to three model variants: pre-trained (base) models, supervised instruction fine-tuned (SIFT, Wei et al., 2022) models, and models trained with reinforcement learning from human feedback (RLHF, Ouyang et al., 2022a). For closed-source models, we evaluate popular commercial LLMs, including gpt-3.5-turbo-0613 from the OpenAI API; claude-instant-1 and claude-2 from the Anthropic Claude API6; and Bard chat-bison-001 from the Bard API. For open-source LLMs, we evaluate the LLaMA-2 model family (7B, 13B, 70B) (Touvron et al., 2023), including base and chat (RLHF) variants; Vicuna-v1.5 (7B, 13B) (Zheng et al., 2023), a SIFT model fine-tuned on multi-turn conversations on top of LLaMA-2-base; the CodeLLaMA model family (7B, 13B, 34B) (Rozière et al., 2023), which continues pre-training LLaMA-2-base on code, including base and instruct (SIFT) variants; and Lemur-v1-70B (Xu et al., 2023), which pre-trains LLaMA-2 on code-intensive data, including base and chat (SIFT) variants.
Metric. We use Success Rate (SR) as our evaluation metric, which measures the percentage of successful task instances. For an interaction limit k, we start from scratch, allow each LLM to interact up to the k-th turn, and measure the corresponding SRk. Unless otherwise noted, we limit k ∈ [1, 5], where k = 1 means no interaction and k = 5 maximizes the number of interaction turns that fit within most modern LLMs' context window (4,096 tokens).

3.2 MEASURING LLM'S TOOL-AUGMENTED TASK-SOLVING IN MULTI-TURN INTERACTION

We ask LLMs to solve tasks (§2.2) with different interaction limits k ∈ {1, 2, 3, 4, 5} without natural language feedback (Fig. 1 without red dotted box), and quantify LLMs' tool-augmented task-solving capability by (1) absolute performance SR5 and (2) improvement per additional interaction turn Δtools, estimated as the slope b from the least-squares regression min_{b,a} Σ_k (b · k + a − SRk)² (Tab. 2). Since the underlying SRk vs. k relationship might not be linear, we only use the regression coefficient (together with its R²) as a rough estimate of the improvement rate, to complement the absolute success rate SR5 for a more comprehensive understanding of the models' capabilities.
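The improvement-rate estimate can be reproduced with a few lines of NumPy; the following is a minimal sketch (not the benchmark's released code), shown here with gpt-3.5-turbo-0613's micro-averaged success rates from Tab. 2.

```python
import numpy as np

def improvement_rate(sr_by_k):
    """Estimate Delta_tools as the least-squares slope of SR_k vs. k.

    `sr_by_k` maps interaction limit k (e.g., 1..5) to success rate in percent.
    Returns (slope b, intercept a, R^2)."""
    k = np.array(sorted(sr_by_k))
    sr = np.array([sr_by_k[i] for i in k])
    b, a = np.polyfit(k, sr, deg=1)          # SR_k ≈ b * k + a
    pred = b * k + a
    ss_res = np.sum((sr - pred) ** 2)
    ss_tot = np.sum((sr - sr.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return b, a, r2

# Example with gpt-3.5-turbo-0613's micro-averaged success rates from Tab. 2:
b, a, r2 = improvement_rate({1: 2.7, 2: 16.9, 3: 24.1, 4: 31.7, 5: 36.2})
print(f"slope={b:.1f} per turn, R^2={r2:.2f}")   # roughly +8.2 and 0.96
```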
Overall Observations. In Fig. 2, we find that all open-source models fall behind the best commercial closed-source models in both SR5 and Δtools, with claude-2 and claude-instant-1 surpassing all open-source LLMs in Δtools with high R², suggesting near-linear improvement. Notably, despite performing badly at k = 1, claude-instant-1 surpasses claude-2 as k increases to 3, eventually achieving a higher SR5 (45.9% vs. 39.9%), suggesting claude-instant-1's superior ability to improve with multi-turn interaction.

6 According to https://docs.anthropic.com/claude/reference/selecting-a-model, we use version v1.2 for claude-instant-1 and v2.0 for claude-2.

Table 2: Tool-augmented task-solving success rate with different interaction limits k (i.e., the maximum number of interaction turns allowed) and improvement rate (estimated with the least-squares regression coefficient; the regression R² is also included). SR is micro-averaged across tasks. The slope (i.e., coefficient) indicates the rate of improvement, while R² denotes the goodness of fit of the regression model to the data.

Models               Size  Type   k=1    k=2    k=3    k=4    k=5    Slope   R²

Open-source LLMs
CodeLLaMA            7B    Base   0.3    4.1    7.2    7.2    4.3    +1.1    0.38
CodeLLaMA            7B    SIFT   0.3    7.8    10.2   9.7    8.7    +1.9    0.53
CodeLLaMA            13B   Base   0.5    13.7   17.9   19.3   18.4   +4.1    0.70
CodeLLaMA            13B   SIFT   1.5    12.6   13.1   15.0   14.5   +2.8    0.64
CodeLLaMA            34B   Base   0.2    16.2   23.0   25.9   28.2   +6.6    0.85
CodeLLaMA            34B   SIFT   2.6    10.1   14.7   15.4   17.1   +3.4    0.86
LLaMA-2              7B    Base   0.2    5.6    7.3    8.9    9.7    +2.2    0.87
LLaMA-2              7B    RLHF   1.0    4.3    6.7    6.5    7.3    +1.5    0.83
LLaMA-2              13B   Base   0.2    11.4   15.5   15.2   14.5   +3.2    0.63
LLaMA-2              13B   RLHF   4.1    12.5   12.5   13.3   11.9   +1.7    0.47
LLaMA-2              70B   Base   1.9    19.4   24.6   26.4   26.4   +5.6    0.73
LLaMA-2              70B   RLHF   4.3    14.3   15.7   16.6   17.9   +3.0    0.73
Lemur-v1             70B   Base   1.0    17.9   23.6   25.3   26.3   +5.8    0.77
Lemur-v1             70B   SIFT   3.8    27.0   35.7   37.5   37.0   +7.7    0.73
Vicuna-v1.5          7B    SIFT   0.0    6.7    12.3   15.4   12.6   +3.4    0.77
Vicuna-v1.5          13B   SIFT   0.0    2.2    4.4    6.7    8.4    +2.1    1.00

Closed-source LLMs
chat-bison-001       -     -      0.3    15.9   14.2   13.0   14.5   +2.5    0.40
claude-2             -     -      26.4   35.5   36.0   39.8   39.9   +3.1    0.81
claude-instant-1     -     -      12.1   32.2   39.2   44.4   45.9   +8.0    0.84
gpt-3.5-turbo-0613   -     -      2.7    16.9   24.1   31.7   36.2   +8.2    0.96
gpt-4-0613           -     -      -      -      -      -      69.5   -       -

* Evaluated LLM failed to produce parsable output as instructed in some cases. See §3.5 and Tab. A.7 for details.
† We identified potential undesired artifacts in its training data, which hurt its performance. See §3.5 for details.
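For reference, the micro-averaged success rate used above simply pools instances across all tasks; the snippet below is a minimal illustration with made-up outcome lists, not the benchmark's evaluation code.

```python
def micro_average_sr(results_by_task):
    """Micro-averaged success rate: pool all instances across tasks.

    `results_by_task` maps a task name to a list of booleans (one per instance)."""
    outcomes = [ok for instances in results_by_task.values() for ok in instances]
    return 100.0 * sum(outcomes) / len(outcomes)

# Example: 2/4 reasoning, 1/2 code, 0/2 decision-making -> 3/8 = 37.5%
print(micro_average_sr({
    "reasoning": [True, False, True, False],
    "code": [True, False],
    "decision_making": [False, False],
}))
```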
Absolute performance and improvement-per-turn scale with model size. For open-source CodeLLaMA and LLaMA-2, we observe a trend across all variants (Base, SIFT, and RLHF) that Δtools and SR5 increase when scaling up LLMs. As we discuss in §3.5, Vicuna-v1.5 models are an exception, potentially due to training artifacts that hurt task performance.

SIFT on multi-turn data could be helpful. Despite the issue above, Vicuna-v1.5 (7B, SIFT) does show stronger performance than LLaMA-2 (7B, Base and RLHF) in both Δtools (+3.4% vs. +2.2% / +1.5%) and SR5 (12.6% vs. 9.7% / 7.3%). Lemur-v1 (70B, SIFT) also shows stronger performance than its Base variant. However, except for CodeLLaMA (7B), we do not find similar improvements on CodeLLaMA (SIFT). We hypothesize that the performance gain on Vicuna-v1.5 and Lemur-v1 could be attributed to fine-tuning on ShareGPT's multi-turn human-ChatGPT conversations.
[Figure 2 (plot): success rate (micro-averaged, %) vs. maximum number of interaction turns k, comparing LLaMA-2 (70B, Base), LLaMA-2 (70B, RLHF), claude-instant-1 (closed-source), and gpt-3.5-turbo-0613 (closed-source). The caption begins: "With an increasing interaction ..."]

RLHF could hurt LLM-tool multi-turn interaction. We find that on the LLaMA-2 series, RLHF alignment generally hurts models' performance in both Δtools (−0.7% to −2.6%) and SR5 (−2.4% to −8.5%), similar to the prior observation that alignment can degrade task performance (Ouyang et al., 2022b). However, it is hard to conclude that RLHF in general hurts model performance. We leave it for future work to explore the role of RLHF in multi-turn interaction.
3.3 MEASURING LLM'S ABILITY TO LEVERAGE NATURAL LANGUAGE FEEDBACK

On top of LLM-tool interaction, we use gpt-4-0613 to simulate user feedback for the evaluated LLMs (Fig. 1 with red dotted box). With a k = 5 interaction limit, we measure the LLM's ability to leverage natural language feedback using the absolute performance SRfeedback and the performance difference after feedback is given: Δfeedback = SRfeedback − SR5.

Overall Observations. We find no significant difference between open- and closed-source models in terms of Δfeedback. Open-source models obtain +1.7% to +17.2% from feedback, while closed-source models obtain +6.5% to +15.2%. However, there is still a gap between them in absolute success rate SRfeedback, as the best open-source model Lemur-v1 (70B, SIFT) still lags behind the best closed-source model claude-instant-1 by 8.7%. Surprisingly, we find that CodeLLaMA-34B-base can achieve performance comparable to GPT-4 on decision-making tasks when given language feedback from GPT-4, showing a strong ability to leverage language feedback.

The effect of SIFT and RLHF. Similar to §3.2, we find that SIFT and RLHF can hurt models' ability to leverage feedback. The results on CodeLLaMA (except 7B) and LLaMA-2 show that SIFT/RLHF models all have lower Δfeedback and SRfeedback than their base variants. Two exceptions are Vicuna-v1.5 (7B) and Lemur-v1 (70B); we speculate that using multi-turn conversations (ShareGPT) for SIFT contributes to these two exceptions.

3.4 MEASURING THE EFFICACY OF DIFFERENT LLMS' ABILITY TO PROVIDE FEEDBACK

Fixing the evaluated model to be gpt-3.5-turbo-0613, we assess seven LLMs' feedback-providing capability through Δfeedback (Tab. 4). Our main finding is that task-solving ability could be orthogonal to feedback-providing ability:
an LLM's higher task-solving performance does not guarantee better feedback-providing capability, and vice versa. For example, although GPT-3.5 (16k) performs well in task-solving (SR5 ranked 3rd in Tab. 4), its feedback leads to a performance degradation of −10.4% in GPT-3.5; similarly, GPT-4 with self-feedback in Tab. 3 also experiences degraded performance. On the other hand, despite performing the worst at solving tasks in Tab. 4, CodeLLaMA-34B-Instruct can provide feedback that improves the stronger GPT-3.5.

3.5 MINT CAN HELP DETECT FAILURE PATTERNS OF EVALUATED LLMS

Surprisingly, beyond evaluating LLMs' multi-turn interaction ability, we find that the complex multi-turn tasks (e.g., Fig. 1) in MINT can also act as a "test suite" that probes an LLM for unexpected behavior. We find two main categories of anomalies: (1) inability to follow formatting instructions and (2) producing unexpected outputs likely due to artifacts.

Inability to Follow Formatting Instructions. We find that some models (e.g., smaller CodeLLaMA and LLaMA, chat-bison-001) have trouble producing a parsable format as instructed, hindering task-solving (statistics can be found in Tab. A.7).

Unexpected Output Likely Due to Data Artifacts. We find that Vicuna models (SIFT on ShareGPT data) generate escaped underscores ("\_") instead of underscores ("_") across all tasks, causing syntax errors when executing code and reducing performance. We examine the ShareGPT data (2023) and find at least one escaped-underscore ("\_") artifact in 15% of examples, suggesting that artifacts in the training data could cause this issue. We observe a similar issue with CodeLLaMA-Instruct: CodeLLaMA-Instruct (34B) always ignores the user-given instruction on code generation tasks to "wrap your code with the <execute> tag" and uses [PYTHON] to wrap the code instead (this happens on 100% of code generation tasks and 0% of other tasks). Touvron et al. (2023) use [PYTHON] as the tag to generate self-instruct data on code problems for SIFT; we suspect CodeLLaMA-Instruct models are trained on and overfitted to the [PYTHON] token, causing them to produce [PYTHON] regardless of user instruction. We refer to §E.1 and §E.2 for examples and quantitative results.
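The checks below are a minimal sketch of how such anomalies could be flagged automatically; the regular expressions and tag names mirror the failure patterns described above, but this is illustrative code, not MINT's evaluation harness.

```python
import re

ESCAPED_UNDERSCORE = re.compile(r"\\_")          # "\_" artifact seen in Vicuna outputs
EXECUTE_BLOCK = re.compile(r"<execute>.*?</execute>", re.DOTALL)
PYTHON_TAG_BLOCK = re.compile(r"\[PYTHON\].*?\[/PYTHON\]", re.DOTALL)

def detect_anomalies(response: str) -> dict:
    """Flag formatting anomalies in a single model response."""
    return {
        # Escaped underscores break Python identifiers when the code is executed.
        "escaped_underscore": bool(ESCAPED_UNDERSCORE.search(response)),
        # The instruction asks for <execute> tags; [PYTHON] tags indicate the
        # CodeLLaMA-Instruct overfitting pattern described above.
        "uses_python_tag": bool(PYTHON_TAG_BLOCK.search(response)),
        "missing_execute_tag": EXECUTE_BLOCK.search(response) is None,
    }

# Example: a response wrapped in [PYTHON] tags with escaped underscores.
sample = "[PYTHON]\ndef my\\_func(x):\n    return x + 1\n[/PYTHON]"
print(detect_anomalies(sample))
# {'escaped_underscore': True, 'uses_python_tag': True, 'missing_execute_tag': True}
```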
3.6 CAN GPT-4 GENERATE HUMAN-LEVEL NATURAL LANGUAGE FEEDBACK?

We perform a human evaluation that quantitatively compares the feedback generated by GPT-4 with feedback written by humans.
Details can be found in Appendix §B. In Tab. 5, human annotators consider 91.2% of GPT-4-generated language feedback to be as helpful as, if not better than, human-written feedback.

Table 3: LLM's ability to leverage natural language feedback, measured by Δfeedback between models' performance with and without feedback produced by gpt-4-0613. All models are evaluated with an interaction turn limit of k = 5. For both open- and closed-source LLMs, the best performance is bolded, and the second-best performance is underlined.

Open-source LLM 7B Base no feedback w/ GPT-4 feedback â feedback, gpt-4 â 0.0 4.8 +4.8 18.7 59.7 +41.0 â 0.0 0.0 +0.0 4.3 16.2 +11.9 SIFT no feedback w/ GPT-4 feedback â feedback, gpt-4 7.9 17.1 +9.2 17.2 62.7 +45.5 2.2 10.3 +8.1 8.7 25.9 +17.2
â 2.2 3.7 +1.5 63.4 84.3 37.3 67.9 +30.6 28.2 42.7 +14.5 17.1 27.3 +10.2 7B Base RLHF no feedback w/ GPT-4 feedback â feedback, gpt-4 no feedback w/ GPT-4 feedback â feedback, gpt-4 2.9 4.1 +1.3 13.6 14.6 +1.0 35.8 46.3 +10.5 â 0.0 2.2 +2.2 0.0 8.1 +8.1 0.0 2.9 +2.9 9.7 14.7 +4.9 7.3 9.0 +1.7 LLaMA-2 13B Base no feedback w/ GPT-4 feedback â feedback, gpt-4 3.5 10.8 +7.3 5.2 15.4 +10.5 +10.3 50.0 60.5 14.5 23.2 +8.7 RLHF no feedback w/ GPT-4 feedback â feedback, gpt-4 19.6 24.1 +4.4 3.7 9.7 +6.0 2.2 10.3 +8.1 11.9 17.6 +5.6 70B Base no feedback w/ GPT-4 feedback â feedback, gpt-4 18.7 22.5 +3.8 12.5 27.9 +14.2 +15.4 59.0 73.1 26.4 35.3 +8.9 RLHF no feedback w/ GPT-4 feedback â feedback, gpt-4 20.2 23.1 +2.9 8.8 19.9 +20.1 +11.0 21.6 41.8 17.9 26.6 +8.7 Lemur-v1 70B Base no feedback w/ GPT-4 feedback â
feedback, gpt-4 16.1 20.9 +4.8 15.4 61.2 70.2 27.9 +9.0 +12.5 26.3 33.8 +7.5 Vicuna-v1.5 SIFT 7B SIFT 13B SIFT no feedback w/ GPT-4 feedback â feedback, gpt-4 no feedback w/ GPT-4 feedback â feedback, gpt-4 no feedback w/ GPT-4 feedback â feedback, gpt-4 31.6 32.6 +0.9 â 10.1 9.8 â 0.3 â 11.1 16.5 +5.4 27.2 59.7 68.7 44.9 +9.0 +17.6 â 2.2 6.6 +4.4 â 2.2 1.5 â 0.7 29.1 64.9 +35.8 â 8.2 5.2 â 3.0 37.0 43.7 +6.7 12.6 21.7 +9.0 8.4 10.4 +2.1 Closed-source LLM chat-bison-001 - - no feedback w/ GPT-4 feedback â feedback, gpt-4 â 14.2 25.0 +10.8 29.9 47.0 +17.2 â 0.0 6.6 +6.6 14.5 25.8 +11.3 claude-2 - - no feedback w/ GPT-4 feedback â feedback, gpt-4 52.2 55.1 +2.8 36.8 47.1 +26.9 +10.3 14.2 41.0 39.9 50.0 +10.1 claude-instant-1 - - no feedback w/ GPT-4 feedback â
feedback, gpt-4 50.0 54.4 +4.4 35.3 47.0 53.0 47.1 +6.0 +11.8 45.9 52.4 +6.5 gpt-3.5-turbo-0613 - - no feedback w/ GPT-4 feedback â feedback, gpt-4 36.7 50.3 +13.6 41.8 66.4 +24.6 29.4 39.0 +9.6 36.2 51.4 +15.2 gpt-4-0613 - - no feedback w/ GPT-4 feedback â feedback, gpt-4 67.4 67.1 â 0.3 84.3 85.1 +0.7 59.6 56.6 â 2.9 69.5 68.8 â 0.7 * Evaluated LLM failed to produce parsable output as instructed in some cases (§2.1). See §3.5 and Tab. A.7 for details. â
We identified potential undesired artifacts in its training data, which hurt its performance. See §3.5 for details.

Table 4: LLMs' ability to provide feedback, measured by Δfeedback with a fixed evaluated LLM (GPT-3.5). We also report the SR5 difference between the feedback-provider and the evaluated LLM.

Feedback-provider LLM           Δfeedback   SR5 Difference
gpt-4-0613                      +15.2       +33.3
claude-instant-1                +1.5        +9.7
gpt-3.5-turbo-16k-0613          −10.4       +4.1
CodeLlama-34b (Base)            +2.4        −8.0
Llama-2-70b (Base)              −0.5        −9.7
Llama-2-70b-chat (RLHF)         −14.0       −18.3
CodeLlama-34b-Instruct (SIFT)   +3.2        −19.1

Table 5: Human evaluation of GPT-4-generated feedback against human-written feedback, measuring helpfulness and human-likeness (all numbers are percentages of cases).

Which feedback is more ...   Helpful   Human-Like
Both are equally             36.3      69.9
GPT-4 feedback               54.9      22.1
Human feedback               8.8       8.0
It is also hard for humans to distinguish GPT-4-generated feedback from human feedback (human-like) in 92% of the cases. We also compare GPT-4-generated and human-written feedback by asking gpt-3.5-turbo-0613 to continue problem-solving with either a turn of (1) human language feedback or (2) GPT-4 feedback. Results show that human feedback and GPT-4 feedback lead to similar model performance SRfeedback.

# 4 RELATED WORK

4.1 LLM IN INTERACTION

Interact with Users. LLMs have demonstrated extensive potential in seamless interaction with human users and in assimilating real-time human feedback during inference (Fernandes et al., 2023). Recent studies have explored this collaborative synergy between humans and LLMs across various domains and applications, including sentence editing (Reid & Neubig, 2022; Schick et al., 2023c), code generation (Nijkamp et al., 2023), iterative output refinement (Saunders et al., 2022), creative writing (Lee et al., 2022a; Shu et al., 2023; Wang et al., 2023b), generative information-seeking (Kamalloo et al., 2023), and even theorem proving (Yang et al., 2023b). The partnership between users and LLMs continues to redefine possibilities across diverse research areas, signaling promising advancements in the near future.

Interact with Tools. Engaging with external tools can lead LLMs to more accurate and reliable outputs (Peng et al., 2023; Gou et al., 2023; Qin et al., 2023a). LLMs can be connected to real-world Application Programming Interfaces (APIs), enabling them to actively engage with diverse external tools (Qin et al., 2023b; Parisi et al., 2022; Schick et al., 2023a; Tang et al., 2023; Patil et al., 2023; Song et al., 2023; Hao et al., 2023).
For example, LLMs can connect with (1) the Internet to obtain the latest information (Nakano et al., 2021; Shuster et al., 2022; Paranjape et al., 2023; Liu et al., 2023b); (2) a program interpreter to run the generated code (Chen et al., 2022; Gao et al., 2023; Drori et al., 2022; Pan et al., 2023; Wang et al., 2023a); (3) a multimodal perceiver to obtain information beyond the language modality (Huang et al., 2023a; Lu et al., 2023); and (4) a physical simulator to better understand physical laws (Liu et al., 2023a).

4.2 EVALUATING INTERACTION

Existing work on interaction evaluation mostly focuses on a specific task or dimension, like task completion (Liu et al., 2023c), code generation (Yang et al., 2023a), human-LLM collaborative task solving (Lee et al., 2022b; Huang et al., 2023b; Fu et al., 2023), tool manipulation (Tang et al., 2023), and web navigation (Zhou et al., 2023; Deng et al., 2023a). That is, they solely focus on interacting with either the environment or humans, often on a specific task, overlooking the fundamental importance of both elements in LLM interaction. Different from prior work, MINT covers a range of diverse tasks and is designed to measure the multi-turn interaction capabilities of LLMs with both tools and user feedback, which is more aligned with real-world applications.

# 5 CONCLUSION

In this work, we present MINT, an evaluation benchmark designed to evaluate LLMs' task-solving ability in multi-turn interaction by using tools and leveraging natural language feedback, which we
simulate using GPT-4. We hope MINT can serve as a helpful resource to track progress and incentivize future research on improving LLMs' multi-turn task-solving capabilities. We refer to §A for a discussion of limitations and future work.

# REFERENCES

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

Bard API. URL https://www.googlecloudcommunity.com/gc/AI-ML/Google-Bard-API/m-p/538517/.

ChatGPT Plugins. URL https://openai.com/blog/chatgpt-plugins.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. CoRR, abs/2211.12588, 2022. doi: 10.48550/arXiv.2211.12588. URL https://doi.org/10.48550/arXiv.2211.12588.
Wenhu Chen, Ming Yin, Max Ku, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi Wang, and Pan Lu. Theoremqa: A theorem-driven question answering dataset. arXiv preprint arXiv:2305.12524, 2023.

Claude API. URL https://docs.anthropic.com/claude/reference/getting-started-with-the-api.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Marc-Alexandre Côté, Ákos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew J. Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler. Textworld: A learning environment for text-based games. In Tristan Cazenave,
Abdallah Saffidine, and Nathan R. Sturtevant (eds.), Computer Games - 7th Workshop, CGW 2018, Held in Conjunction with the 27th International Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, July 13, 2018, Revised Selected Papers, volume 1017 of Communications in Computer and Information Science, pp. 41–75. Springer, 2018. doi: 10.1007/978-3-030-24337-1_3. URL https://doi.org/10.1007/978-3-030-24337-1_3.

Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. CoRR, abs/2306.06070, 2023a. doi: 10.48550/arXiv.2306.06070. URL https://doi.org/10.48550/arXiv.2306.06070.

Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023b.
CoRR, abs/2306.06070, 2023a. doi: 10.48550/arXiv.2306.06070. URL https://doi.org/10.48550/arXiv.2306.06070. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023b.
Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32):e2123433119, 2022. Patrick Fernandes, Aman Madaan, Emmy Liu, Ant´onio Farinhas, Pedro Henrique Martins, Amanda Bertsch, Jos´e G. C. de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, and Andr´e F. T. Martins. Bridging the gap:
A survey on integrating (human) feedback for natural language gen- eration. CoRR, 2023. 10 Preprint. Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. Improving language model negotiation with self-play and in-context learning from ai feedback, 2023. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. PAL: program-aided language models. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol- ume 202 of Proceedings of Machine Learning Research, pp. 10764â
10799. PMLR, 2023. URL https://proceedings.mlr.press/v202/gao23f.html. Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. CRITIC: large language models can self-correct with tool-interactive critiquing. CoRR, abs/2305.11738, 2023. doi: 10.48550/arXiv.2305.11738. URL https://doi.org/10. 48550/arXiv.2305.11738. Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. CoRR, abs/2305.11554, 2023. doi: 10.48550/ arXiv.2305.11554. URL https://doi.org/10.48550/arXiv.2305.11554. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and arXiv preprint Jacob Steinhardt. Measuring massive multitask language understanding. arXiv:2009.03300, 2020. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021. Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, Yi Ren, Zhou Zhao, and Shinji Watanabe. Audiogpt: Understanding and generating speech, music, sound, and talking head.
CoRR, abs/2304.12995, 2023a. doi: 10.48550/arXiv.2304.12995. URL https://doi.org/10.48550/arXiv. 2304.12995. Shulin Huang, Shirong Ma, Yinghui Li, Mengzuo Huang, Wuhe Zou, Weidong Zhang, and Hai-Tao Zheng. Lateval: An interactive llms evaluation benchmark with incomplete information from lateral thinking puzzles. arXiv preprint arXiv:2308.10855, 2023b. Ehsan Kamalloo, Aref Jafari, Xinyu Zhang, Nandan Thakur, and Jimmy Lin.
HAGRID: A human-llm collaborative dataset for generative information-seeking with attribution. CoRR, abs/2307.16883, 2023. doi: 10.48550/arXiv.2307.16883. URL https://doi.org/10. 48550/arXiv.2307.16883. Mina Lee, Percy Liang, and Qian Yang. Coauthor: Designing a human-ai collaborative writing dataset for exploring language model capabilities. In Simone D. J. Barbosa, Cliff Lampe, Caroline Appert, David A. Shamma, Steven Mark Drucker, Julie R. Williamson, and Koji Yatani (eds.), CHI â
22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022 - 5 May 2022, pp. 388:1â 388:19. ACM, 2022a. doi: 10.1145/3491102.3502030. URL https://doi.org/10.1145/3491102.3502030. Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael S. Bernstein, and Percy Liang.
Evaluating human-language model interaction. CoRR, abs/2212.09746, 2022b. doi: 10.48550/arXiv.2212.09746. URL https://doi.org/10.48550/arXiv.2212.09746. Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M. Dai. Mindâ s eye:
Grounded language model reasoning through simula- tion. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023a. URL https://openreview.net/pdf? id=4rXMRuoJlai. 11 Preprint. Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, and Jie Tang. Webglm: Towards an efficient web-enhanced question answering system with human preferences. In Ambuj Singh, Yizhou Sun, Leman Akoglu, Dimitrios Gunopulos, Xifeng Yan, Ravi Kumar, Fatma Ozcan, and Jieping Ye (eds.), Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2023, Long Beach, CA, USA, August 6-10, 2023, pp. 4549â 4560. ACM, 2023b. doi: 10.1145/3580305.3599931. URL https: //doi.org/10.1145/3580305.3599931. Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. Agentbench: Evaluating llms as agents. CoRR, abs/2308.03688, 2023c. doi: 10.48550/ arXiv.2308.03688. URL https://doi.org/10.48550/arXiv.2308.03688. Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, arXiv preprint Kaiwen Men, Kejuan Yang, et al.
Agentbench: Evaluating llms as agents. arXiv preprint arXiv:2308.03688, 2023d.

Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. CoRR, abs/2304.09842, 2023. doi: 10.48550/arXiv.2304.09842. URL https://doi.org/10.48550/arXiv.2304.09842.
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented language models: a survey. arXiv preprint arXiv:2302.07842, 2023.

Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. Webgpt: Browser-assisted question-answering with human feedback.
CoRR, abs/2112.09332, 2021. URL https://arxiv.org/abs/2112.09332. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023, 2023. OpenAI. Gpt-4 technical report, 2023. # OpenAI API. URL https://openai.com/blog/openai-api.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kel- ton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022a. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe.
Training language models to follow instructions with human feedback. In NeurIPS, 2022b. URL http://papers.nips.cc/paper_files/paper/2022/ hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html. Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, and Preslav Nakov. Fact-checking complex claims with program-guided reasoning. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 6981â
7004. Association for Computational Linguistics, 2023. doi: 10.18653/v1/2023.acl-long.386. URL https://doi.org/10.18653/v1/2023.acl-long.386.

Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Túlio Ribeiro. ART: automatic multi-step reasoning and tool-use for large language models. CoRR, abs/2303.09014, 2023. doi: 10.48550/arXiv.2303.09014. URL https://doi.org/10.48550/arXiv.2303.09014.
Aaron Parisi, Yao Zhao, and Noah Fiedel. TALM: tool augmented language models. CoRR, abs/2205.12255, 2022. doi: 10.48550/arXiv.2205.12255. URL https://doi.org/10. 48550/arXiv.2205.12255. Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. Gorilla:
Large language model connected with massive apis. CoRR, abs/2305.15334, 2023. doi: 10.48550/arXiv.2305.15334. URL https://doi.org/10.48550/arXiv.2305.15334. Jiaxin Pei, Aparna Ananthasubramaniam, Xingyao Wang, Naitian Zhou, Apostolos Dedeloudis, Jackson Sargent, and David Jurgens.
Potato: The portable text annotation tool. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2022. Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, and Jianfeng Gao. Check your facts and try again: Improving large language models with external knowledge and automated feedback. CoRR, abs/2302.12813, 2023. doi: 10.48550/arXiv.2302.12813. URL https://doi.org/10.48550/arXiv. 2302.12813. Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun.
Tool learning with foundation models. In arxiv, 2023a.

Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023b.

Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023c.

Machel Reid and Graham Neubig. Learning to model editing processes. In Findings of the Association for Computational Linguistics. Association for Computational Linguistics, 2022.

Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al.
Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023. William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Self-critiquing models for assisting human evaluators. CoRR, abs/2206.05802, 2022. doi: 10.48550/arXiv.2206.05802. URL https://doi.org/10.48550/arXiv.2206.05802.
Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023a. doi: 10.48550/arXiv.2302.04761. URL https: //doi.org/10.48550/arXiv.2302.04761.
Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools, 2023b. Timo Schick, Jane A. Yu, Zhengbao Jiang, Fabio Petroni, Patrick S. H. Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, and Sebastian Riedel. PEER: A collab- orative language model. In The Eleventh International Conference on Learning Representations,
ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023c. URL https://openreview.net/pdf?id=KbYevcLjnc.

ShareGPT data, 2023. URL https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered.

Noah Shinn, Beck Labash, and Ashwin Gopinath.
Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.

Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768, 2020.

Lei Shu, Liangchen Luo, Jayakumar Hoskere, Yun Zhu, Canoee Liu, Simon Tong, Jindong Chen, and Lei Meng. Rewritelm: An instruction-tuned large language model for text rewriting. CoRR, abs/2305.15685, 2023. doi: 10.48550/arXiv.2305.15685. URL https://doi.org/10.48550/arXiv.2305.15685.

Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, William Ngan, Spencer Poff, Naman Goyal, Arthur Szlam, Y-Lan Boureau, Melanie Kambadur, and Jason Weston.
Blender- bot 3: a deployed conversational agent that continually learns to responsibly engage. CoRR, abs/2208.03188, 2022. doi: 10.48550/arXiv.2208.03188. URL https://doi.org/10. 48550/arXiv.2208.03188. Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li, Ke Wang, Ye Tian, and Sujian Li.
Rest- gpt: Connecting large language models with real-world applications via restful apis. CoRR, abs/2306.06624, 2023. doi: 10.48550/arXiv.2306.06624. URL https://doi.org/10. 48550/arXiv.2306.06624. Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, and Le Sun.
Toolalpaca: Gen- eralized tool learning for language models with 3000 simulated cases. CoRR, abs/2306.05301, 2023. doi: 10.48550/arXiv.2306.05301. URL https://doi.org/10.48550/arXiv. 2306.05301. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
US Bureau of Labor Statistics. Table b-3. average hourly and weekly earnings of all employees on private nonfarm payrolls by industry sector, seasonally adjusted, 2023. URL https://www. bls.gov/news.release/empsit.t19.htm. Accessed: 2023-9-3. Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, and Heng Ji. Leti: Learning to generate from textual interactions. arXiv preprint arXiv:2305.10314, 2023a. Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. Large language models are cognitive synergists: Task solving through multi-persona self-collaboration. In arxiv, 2023b. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le.
Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id= gEZrGCozdqR. Yiheng Xu, Hongjin Su, Chen Xing, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, and Tao Yu.
Lemur: Harmonizing natural language and code for language agents, 2023.

John Yang, Akshara Prabhakar, Karthik Narasimhan, and Shunyu Yao. Intercode: Standardizing and benchmarking interactive coding with execution feedback. arXiv preprint arXiv:2306.14898, 2023a.

Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, and Anima Anandkumar.
Leandojo: Theorem proving with retrieval-augmented language models. CoRR, abs/2306.15626, 2023b. doi: 10.48550/arXiv.2306.15626. URL https://doi.org/10.48550/arXiv.2306.15626. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning.
Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600, 2018. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2022. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica.
Judging llm-as-a-judge with mt-bench and chatbot arena, 2023. Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, and Graham Neubig. Webarena: A realistic web environ- ment for building autonomous agents. CoRR, abs/2307.13854, 2023. doi: 10.48550/arXiv.2307. 13854. URL https://doi.org/10.48550/arXiv.2307.13854.