doi (string, 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, nullable ⌀) | journal_ref (string, 8–194, nullable ⌀) | primary_category (string, 5–17) | published (string, 8) | updated (string, 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.10621 | 23 | # 4 RESULTS
After running the prompt, the generated label was converted to a score in [0, 2]. Where we generated multiple labels, the final score is simply the mean. In keeping with the TREC guidelines, if we prompted for aspects we still considered only the overall label. If the model generated unparseable output, we dropped the result entirely: this happened in 90 out of 96 000 cases.
TREC-Robust included two sets of topics. Topics up to 650 came from earlier editions of TREC, and had only binary relevance judgements ('relevant' or 'non-relevant'; 1 or 0). Topics 651–700 were developed for the track, and have three-level judgements (adding 'highly relevant', 2). Our prompts generated scores from 0 to 2 for all documents, in line with instructions to TREC-Robust assessors for the new topics. Since comparisons are difficult between a three- and a two-level scale, we follow TREC and Faggioli et al. [2023] by considering 'relevant' and 'highly relevant' together, i.e. by binarising the scores in all cases. | 2309.10621#23 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
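The label-to-score aggregation described in the chunk above (map each generated label to {0, 1, 2}, average multiple samples, drop unparseable output, then binarise by pooling 'relevant' and 'highly relevant') can be sketched as follows. This is a minimal illustration; the function names and label strings are assumptions, not the paper's code.

```python
from statistics import mean

# Assumed mapping from generated label text to the 0-2 scale described above.
LABEL_TO_SCORE = {"not relevant": 0, "relevant": 1, "highly relevant": 2}

def aggregate_score(generated_labels):
    """Average the parseable labels; return None (drop the judgement) if none parse."""
    scores = [LABEL_TO_SCORE[lab.strip().lower()]
              for lab in generated_labels
              if lab.strip().lower() in LABEL_TO_SCORE]
    return mean(scores) if scores else None

def binarise(score):
    """Pool 'relevant' (1) and 'highly relevant' (2) into a single positive class."""
    return None if score is None else int(score >= 1)

# Three sampled labels for one query:document pair -> mean 1.33 -> binarised 1
print(binarise(aggregate_score(["relevant", "highly relevant", "relevant"])))
```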
2309.10818 | 23 | performed per data source is presented in Table 2. The initial implementation of MinHashLSH did not scale to trillion-token datasets like RedPajama without running out of memory. This is overcome by optimizing the memory usage and parallelization to perform deduplication on 64 CPU cores with 1.4TB peak memory usage, which can be easily decreased by creating multiple MinHashLSH objects to query.
# 3 Dataset Combination Configurations
# 3.1 SlimPajama | 2309.10818#23 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
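As an illustration of the MinHashLSH deduplication discussed in the chunk above, here is a minimal sketch built on the datasketch library. It is not the authors' memory-optimized, 64-core implementation; the shingle length, similarity threshold, and number of permutations are assumed values.

```python
from datasketch import MinHash, MinHashLSH

def minhash_signature(text, num_perm=128, shingle_len=13):
    """Build a MinHash signature over character shingles of one document."""
    m = MinHash(num_perm=num_perm)
    for i in range(max(1, len(text) - shingle_len + 1)):
        m.update(text[i:i + shingle_len].encode("utf8"))
    return m

def deduplicate(docs, threshold=0.8, num_perm=128):
    """Keep the first member of each near-duplicate cluster.

    Running this per source corresponds to local deduplication; running it over
    the concatenation of all sources corresponds to global deduplication.
    """
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for key, text in docs:
        sig = minhash_signature(text, num_perm=num_perm)
        if lsh.query(sig):      # a sufficiently similar document is already indexed
            continue
        lsh.insert(key, sig)
        kept.append(key)
    return kept

docs = [("a", "the quick brown fox jumps over the lazy dog"),
        ("b", "the quick brown fox jumps over the lazy dog!"),
        ("c", "a completely different document about training data curation")]
print(deduplicate(docs))  # expected to keep ['a', 'c'] and drop the near-duplicate 'b'
```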
2309.10305 | 24 | Furthermore, in order to efficiently scale our training cluster to thousands of GPUs, we integrate the following techniques to avoid the degradation of communication efficiency: • Topology-aware distributed training. In large-scale clusters, network connections frequently span multiple layers of switches. We strategically arrange the ranks for distributed training to minimize frequent access across different switches, which reduces latency and thereby enhances overall training efficiency.
• Hybrid and hierarchical partition for ZeRO. By partitioning parameters across GPUs, ZeRO3 reduces memory consumption at the expense of additional all-gather communications. This approach would lead to a significant communication bottleneck when scaling to thousands of GPUs (Jiang et al., 2023a). To address this issue, we propose a hybrid and hierarchical partitioning scheme. Specifically, our framework first partitions the optimizer states across all GPUs, and then adaptively decides which layers need to activate ZeRO3, and whether to partition parameters hierarchically. By integrating these strategies, our system is capable of training Baichuan 2-7B and Baichuan 2-13B models efficiently on 1,024 NVIDIA A800
GPUs, achieving a computational efficiency that exceeds 180 TFLOPS.
# 3 Alignment | 2309.10305#24 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
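The topology-aware rank arrangement described in the chunk above can be illustrated with a toy placement check: keep the most communication-heavy groups (here, tensor-parallel groups) inside a single node and switch so that little traffic crosses switch boundaries. This is an illustrative sketch, not Baichuan 2's training framework; all parameters are assumed.

```python
def arrange_ranks(num_nodes, gpus_per_node, nodes_per_switch, tp_size):
    """Assign consecutive ranks to the same tensor-parallel (TP) group so each
    group stays inside one node, and therefore under one switch."""
    layout = []
    for node in range(num_nodes):
        for gpu in range(gpus_per_node):
            rank = node * gpus_per_node + gpu
            layout.append({"rank": rank,
                           "switch": node // nodes_per_switch,
                           "tp_group": rank // tp_size})
    return layout

def cross_switch_groups(layout):
    """Count TP groups whose members span more than one switch (traffic to avoid)."""
    switches_by_group = {}
    for entry in layout:
        switches_by_group.setdefault(entry["tp_group"], set()).add(entry["switch"])
    return sum(len(s) > 1 for s in switches_by_group.values())

layout = arrange_ranks(num_nodes=8, gpus_per_node=8, nodes_per_switch=4, tp_size=8)
print(cross_switch_groups(layout))  # 0: every TP group fits within one node and one switch
```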
2309.10621 | 24 | We evaluate the quality of these labels (not the documents) in three ways: by comparing the model's labels for each document to the labels from TREC assessors, by comparing the aggregated scores for each query, and by comparing the overall system rankings that result.
TREC assessor 0: model 0 = 866, model 1 or 2 = 95; TREC assessor 1 or 2: model 0 = 405, model 1 or 2 = 1585.
Table 1. Results from the best-performing prompt of Figure 1 (i.e. with descriptions, narrative, and aspects, prompt '-DNA-') over a stratified sample of the TREC Robust data. Overall, the LLM is more likely to say 'not relevant' than were TREC assessors; an LLM assessment of 'relevant' or 'highly relevant' is reliable. Some qrels are missing due to unparseable LLM output, a rate of 1.6%.
# 4.1 Comparing scores | 2309.10621#24 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
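The document-level comparison in the chunk above reduces to a 2×2 confusion matrix between binarised model labels and TREC assessor labels. A minimal sketch using the Table 1 counts as laid out above:

```python
from collections import Counter

# (TREC assessor label, model label) -> count, binarised to 0 vs. "1 or 2"
matrix = Counter({(0, 0): 866, (0, 1): 95, (1, 0): 405, (1, 1): 1585})

total = sum(matrix.values())                    # 2951 judged pairs
llm_says_0 = matrix[(0, 0)] + matrix[(1, 0)]    # model says "not relevant" (~43-44%)
trec_says_0 = matrix[(0, 0)] + matrix[(0, 1)]   # TREC assessor says "not relevant" (~33%)
print(round(llm_says_0 / total, 2), round(trec_says_0 / total, 2))      # 0.43 0.33
print(round(matrix[(0, 0)] / llm_says_0, 2))                            # 0.68 agreement when the LLM says 0
print(round(matrix[(1, 1)] / (total - llm_says_0), 2))                  # 0.94 agreement when the LLM says 1 or 2
```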
2309.10691 | 24 | Metric. We consider Success Rate SR as our evaluation metric, which measures the percentage of successful task instances. For interaction limit k, we start from scratch and allow each LLM to interact up to the k-th turn and measure their corresponding SRk. Unless otherwise noted, we limit k ∈ [1, 5] where k = 1 means no interaction and k = 5 maximizes interaction turns within most modern LLMs' context window (4,096 tokens).
3.2 MEASURING LLM'S TOOL-AUGMENTED TASK-SOLVING IN MULTI-TURN INTERACTION | 2309.10691#24 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
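A minimal sketch of the SR_k metric defined in the chunk above: for an interaction limit k, a task instance counts as solved if the model succeeds within the first k turns, and SR_k is the fraction of solved instances. The data layout is illustrative.

```python
def success_rate(first_success_turn, k):
    """SR_k: fraction of task instances solved within the first k interaction turns.

    `first_success_turn` maps each task id to the turn at which the model first
    succeeded, or None if it never succeeded (illustrative data layout).
    """
    solved = sum(1 for turn in first_success_turn.values()
                 if turn is not None and turn <= k)
    return solved / len(first_success_turn)

# Toy example: 4 task instances, interaction limits k = 1..5
outcomes = {"task-1": 1, "task-2": 3, "task-3": None, "task-4": 5}
for k in range(1, 6):
    print(k, success_rate(outcomes, k))  # SR_1 = 0.25, SR_3 = 0.5, SR_5 = 0.75
```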
2309.10818 | 24 | # 3 Dataset Combination Configurations
# 3.1 SlimPajama
Combination Strategies. As shown in Table 3, the adjusted domain weights establish a new training distribution. Using this distribution, we adopt a standard training approach to learn a consistent model architecture. This architecture remains unchanged across various domain weights and is trained using data from diverse combination distributions. Across different setups, we maintain the total training tokens to be the same. Our examination of domain weights in large language model training focuses on three main areas: 1) Incrementally increasing the diversity of source combinations, as seen in configurations 1, 2, and 3. 2) With consistent data sources, we explore varying domain proportions as presented in configurations 2, 4, and 5. 3) We assess the significance of individual domain sources concerning the final model's performance. Note that given the minimal impact of ArXiv and StackExchange, we have opted to omit them from the ablations in configuration 3 to conserve training resources and keep relatively sufficient training tokens for CommonCrawl. The detailed configurations are as follows:
• Configuration-1: 330B CommonCrawl
• Configuration-2: 300B CommonCrawl + 30B Github
• Configuration-3: 250B CommonCrawl + 30B Github + 26B Books + 24B Wikipedia | 2309.10818#24 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
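To make the token budgets in the configurations above concrete, a small sketch that converts per-source budgets into sampling proportions; the helper name is illustrative, and the example reproduces the Configuration-3 split.

```python
def domain_weights(token_budgets_in_billions):
    """Convert per-source token budgets into percentage sampling proportions."""
    total = sum(token_budgets_in_billions.values())
    return {src: round(100 * tokens / total, 1)
            for src, tokens in token_budgets_in_billions.items()}, total

# Configuration-3: 250B CommonCrawl + 30B GitHub + 26B Books + 24B Wikipedia
weights, total = domain_weights({"CommonCrawl": 250, "GitHub": 30,
                                 "Books": 26, "Wikipedia": 24})
print(total)    # 330 (billion tokens, kept constant across configurations)
print(weights)  # {'CommonCrawl': 75.8, 'GitHub': 9.1, 'Books': 7.9, 'Wikipedia': 7.3}
```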
2309.10305 | 25 | GPUs, achieving a computational efficiency that exceeds 180 TFLOPS.
# 3 Alignment
Baichuan 2 also introduces the alignment procedure resulting in two chat models: Baichuan 2-7B-Chat and Baichuan 2-13B-Chat. The alignment process of the Baichuan 2 encompasses two main components: Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF).
# 3.1 Supervised Fine-Tuning
During the supervised fine-tuning phase, we use human labelers to annotate prompts gathered from various data sources. Each prompt is labeled as being helpful or harmless based on key principles similar to Claude (2023). To validate data quality, we use cross-validation: an authoritative annotator checks the quality of a sample batch annotated by a specific crowd worker group, rejecting any batches that do not meet our quality standards.
We collected over 100k supervised fine-tuning samples and trained our base model on them. Next, we delineated the reinforcement learning process via the RLHF method to further improve results. The whole process of RLHF, including RM and RL training, is shown in Figure 5. | 2309.10305#25 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 25 | # 4.1 Comparing scores
Similar to Faggioli et al. [2023], we compare these model-generated scores to scores from the TREC assessors. As an example, Table 1 gives a confusion matrix for one prompt and all 3000 query:document pairs. (There are 32 such matrices, one for each set of prompt features or equivalently one for each row of Table 2.) We can see that in this case, the LLM is more likely to say 'not relevant' than were TREC assessors (44% vs 33%), and is correspondingly inaccurate (68% agreement with TREC, when the LLM says 'not relevant'). An LLM assessment of 'relevant' or 'highly relevant', however, is reliable (94% agreement). | 2309.10621#25 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 25 | 3.2 MEASURING LLM'S TOOL-AUGMENTED TASK-SOLVING IN MULTI-TURN INTERACTION
We ask LLMs to solve tasks (§2.2) with different interaction limits k ∈ {1, 2, 3, 4, 5} without natural language feedback (Fig. 1 without red dotted box), and quantify LLMs' tool-augmented task-solving capability by (1) absolute performance SR5 and (2) improvement per additional interaction turn ∆tools, estimated as the slope b from the least-squares regression min_{b,a} Σ_k (b · k + a − SRk)² (Tab. 2). Since the underlying SRk vs. k relationship might not be linear, we only use the regression coefficient (with R²) as a rough estimate of the improvement rate to complement the absolute success rate SR5 for a more comprehensive understanding of the models' capabilities.
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
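A minimal numpy sketch of the improvement-rate estimate described in the chunk above: fit SR_k against k with least squares and report the slope b alongside R². The SR values below are illustrative.

```python
import numpy as np

def improvement_rate(sr_by_k):
    """Least-squares fit SR_k ~ b*k + a; return the slope b and goodness of fit R^2."""
    k = np.array(sorted(sr_by_k))
    sr = np.array([sr_by_k[key] for key in sorted(sr_by_k)])
    b, a = np.polyfit(k, sr, 1)
    residuals = sr - (b * k + a)
    r2 = 1.0 - residuals.var() / sr.var()
    return b, r2

# Illustrative success rates (%) at interaction limits k = 1..5
b, r2 = improvement_rate({1: 0.2, 2: 16.2, 3: 23.0, 4: 25.9, 5: 28.2})
print(round(b, 1), round(r2, 2))  # roughly +6.6 points per extra turn, R^2 about 0.85
```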
2309.10818 | 25 | • Configuration-2: 300B CommonCrawl + 30B Github
• Configuration-3: 250B CommonCrawl + 30B Github + 26B Books + 24B Wikipedia
• Configuration-4: 250B CommonCrawl + 80B Github (adjust sampling proportion)
• Configuration-5: 250B CommonCrawl + 80B Wikipedia (adjust sampling proportion)
• Configuration-6: 330B RefinedWeb CommonCrawl
# 3.2 RefinedWeb
RefinedWeb [27] is a massive English web dataset that is constructed using rigorous filtering and extensive deduplication of CommonCrawl. We use it as the comparison to our SlimPajama-DC CommonCrawl-only training.
| 2309.10818#25 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 26 | [Figure 5 diagram: prompts drawn from a prompt/data pool, candidate responses scored by the reward model under human guidelines, and checkpoints of model variants saved during training]
Figure 5: An illustration of Baichuan 2's RLHF process.
# 3.2 Reward Model
We devised a three-tiered classification system for all prompts, consisting of 6 primary categories, 30 secondary categories, and over 200 tertiary categories. From the user's perspective, we aim for the classification system to comprehensively cover all types of user needs. From the standpoint of reward model training, prompts within each
Score gap vs. reward model test accuracy: gap 1 = 54.5%, gap 2 = 61.1%, gap 3 = 70.2%, gap 4 = 77.8%, gap 5 = 81.5%.
Table 4: Reward Model test accuracy on different score gaps of two responses. The larger the response gap, the better the RM accuracy. The gaps 1, 2, 3, 4, and 5 correspond to unsure, negligibly better, slightly better, better, and significantly better, respectively.
category should have sufficient diversity to ensure the reward model can generalize well. | 2309.10305#26 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 26 | Table 2 summarises the models' agreement with human judges, over the 3000 query:document pairs, as we manipulate the prompt as above: there is one row for each prompt, identified by which optional features are included. For example, the row labelled '--N-M' corresponds to the prompt with narrative and multiple judges, but not role statement, description, or aspects. For each prompt, we report the three document-level, one query-level, and one system-level metrics described above, plus a 95% confidence interval based on 20 bootstraps over documents. The best-performing prompt for each metric is labelled with a ★, and these are significantly better than any other (t test, p < 0.05).
Performance is highly variable as we change the features; that is, the quality of the labelling depends a great deal on the prompt structure or template. For example, Cohen's κ varies from as low as 0.20 (prompt 'R---M') to 0.64 (prompt '-DNA-'). We need to be accordingly careful interpreting any claim based on a single prompt, especially where that prompt has not been tuned against some existing labels; we also observe this in the variable performance reported in Liang et al. [2022], for example. | 2309.10621#26 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
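Two of the document-level statistics used in the chunk above, Cohen's κ and a bootstrap confidence interval over documents, can be sketched with plain numpy. The label vectors are illustrative; 20 bootstrap resamples matches the setup described above.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary label vectors."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)                                # observed agreement
    p_e = (np.mean(a == 1) * np.mean(b == 1)
           + np.mean(a == 0) * np.mean(b == 0))          # agreement expected by chance
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

def bootstrap_ci(a, b, stat=cohens_kappa, n_boot=20, alpha=0.05, seed=0):
    """Percentile confidence interval from resampling documents with replacement."""
    a, b = np.asarray(a), np.asarray(b)
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(a), len(a))
        stats.append(stat(a[idx], b[idx]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

llm = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1])
trec = np.array([1, 0, 0, 0, 1, 0, 1, 1, 1, 1])
print(round(cohens_kappa(llm, trec), 2), bootstrap_ci(llm, trec))
```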
2309.10691 | 26 | Overall Observations. In Fig. 2, we find all open-source models fall behind the best commercial closed-source models in both SR5 and ∆tools, with claude-2 and claude-instant-1 surpassing all open-source LLMs in ∆tools with high R², suggesting near-linear improvement. Notably, despite performing badly at k = 1, claude-instant-1 surpasses claude-2 as k increases
Footnote 6: According to https://docs.anthropic.com/claude/reference/selecting-a-model, we use version v1.2 for claude-instant-1 and v2.0 for claude-2.
Table 2: Tool-augmented task-solving success rate with different interaction limit k (i.e., max number of interaction turns allowed) and improvement rate (estimated with the least-squares regression coefficient; regression R² is also included). The slope (i.e., coefficient) indicates the rate of improvement while R² denotes the goodness of fit of the regression model to the data.
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 26 | Table 3: Six configurations of sub-dataset combinations in SlimPajama (each totalling 330B training tokens). DC-1: SlimPajama CommonCrawl 100.0%. DC-2: CommonCrawl 90.9%, GitHub 9.1%. DC-3: CommonCrawl 75.8%, GitHub 9.1%, Books 7.9%, Wikipedia 7.3%. DC-4: CommonCrawl 75.8%, GitHub 24.2%. DC-5: CommonCrawl 75.8%, Wikipedia 24.2%. DC-6: RefinedWeb CommonCrawl 100.0%. All other SlimPajama sources (C4, ArXiv, StackExchange) are 0.0% in every configuration.
# 4 Network Architecture and Training Details
# 4.1 Network Architecture | 2309.10818#26 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 27 | category should have sufficient diversity to ensure the reward model can generalize well.
Given a prompt, responses are generated by Baichuan 2 models of different sizes and stages (SFT, PPO) to enhance response diversity. Only responses generated by the Baichuan 2 model family are used in the RM training. Responses from other open-source datasets and proprietary models do not improve the reward model's accuracy. This also underscores the intrinsic consistency of the Baichuan 2 model series from another perspective. The loss function used for training the reward model is consistent with that in InstructGPT (Ouyang et al., 2022). The reward model derived from training exhibits a performance consistent with that of LLaMA 2 (Touvron et al., 2023b): the greater the score difference between two responses, the higher the discriminative accuracy of the reward model, as shown in Table 4.
# 3.3 PPO
After obtaining the reward model, we employ the PPO (Schulman et al., 2017) algorithm to train our language model. We employ four models: the actor model (responsible for generating responses), the reference model (used to compute the KL penalty with fixed parameters), the reward model (providing an overarching reward for the entire response with fixed parameters), and the critic model (designed to learn per-token values).
# 3.4 Training Details | 2309.10305#27 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
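The InstructGPT-style reward-model loss referenced in the chunk above is a pairwise ranking loss: maximise the log-sigmoid of the score gap between the preferred and the rejected response. A minimal PyTorch sketch (the scores are toy values; this is not Baichuan 2's implementation):

```python
import torch
import torch.nn.functional as F

def reward_ranking_loss(r_preferred, r_rejected):
    """Pairwise loss from InstructGPT (Ouyang et al., 2022):
    -log(sigmoid(r_preferred - r_rejected)), averaged over response pairs."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()

# Toy reward-model scores for preferred vs. rejected responses to the same prompts
r_pref = torch.tensor([1.2, 0.4, 2.0])
r_rej = torch.tensor([0.3, 0.5, -1.0])
print(reward_ranking_loss(r_pref, r_rej).item())  # smaller when preferred responses score higher
```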
2309.10691 | 27 | Models, size, and type, with SR (micro-averaged across tasks) at k = 1 / 2 / 3 / 4 / 5 and improvement rate (slope, R²). Open-source LLMs: CodeLLaMA 7B Base 0.3 / 4.1 / 7.2 / 7.2 / 4.3 (+1.1, 0.38); CodeLLaMA 7B SIFT 0.3 / 7.8 / 10.2 / 9.7 / 8.7 (+1.9, 0.53); CodeLLaMA 13B Base 0.5 / 13.7 / 17.9 / 19.3 / 18.4 (+4.1, 0.70); CodeLLaMA 13B SIFT 1.5 / 12.6 / 13.1 / 15.0 / 14.5 (+2.8, 0.64); CodeLLaMA 34B Base 0.2 / 16.2 / 23.0 / 25.9 / 28.2 (+6.6, 0.85); CodeLLaMA 34B SIFT 2.6 / 10.1 / 14.7 / 15.4 / 17.1 (+3.4, 0.86); LLaMA-2 7B Base 0.2 / 5.6 / 7.3 / 8.9 / 9.7 (+2.2, 0.87); LLaMA-2 7B RLHF 1.0 / 4.3 / 6.7 / 6.5 / 7.3 (+1.5, 0.83); LLaMA-2 13B Base 0.2 / 11.4 / 15.5 / 15.2 / 14.5 (+3.2); LLaMA-2 13B RLHF 4.1 / 12.5 / 12.5 / 13.3 / 11.9 (+1.7) | 2309.10691#27 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 27 | Table 3: Six configurations of sub-dataset combinations in SlimPajama.
# 4 Network Architecture and Training Details
# 4.1 Network Architecture
Cerebras-GPT Architecture [11]. Cerebras-GPT architecture shares similarities with those built on GPT-3 [4], particularly in the use of an autoregressive transformer decoder. However, a key difference lies in the attention mechanism employed. While GPT-3 utilizes a mix of dense and sparse-banded attention, Cerebras-GPT consistently uses dense attention across all decoder blocks. In terms of model dimensions, we either adhere to an aspect ratio of approximately 80 (dmodel/nlayers) or maintain dimensions that are congruent with GPT-3 models. Additionally, all of our models are trained to handle a maximum sequence length of 2,048 tokens. The detailed architecture is shown in Table 4. Alibi [28]. Alibi introduces a more streamlined and efficient positional approach called Attention with Linear Biases. Rather than adding positional embeddings to word embeddings, ALiBi applies a bias to query-key attention scores, penalizing them based on their distance. SwiGLU [32]. SwiGLU is an activation function which is a variant of GLU [9]. The formulation is as follows:
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 28 | # 3.4 Training Details
During the RLHF training process, the critic model is first warmed up for an initial 20 training steps. Subsequently, both the critic and actor models are updated via the standard PPO algorithm. For all models, we use gradient clipping of 0.5, a constant learning rate of 5e-6, and a PPO clip threshold ϵ = 0.1. We set the KL penalty coefficient β = 0.2, decaying to 0.005 over steps. We train for 350 iterations for all our chat models, resulting in Baichuan 2-7B-Chat and Baichuan 2-13B-Chat.
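For concreteness, the reported settings can be collected into a single configuration object. The sketch below is a minimal, framework-agnostic illustration; names such as `RLHFConfig` and `kl_coef_at` are our own, and the linear KL decay is an assumption, since the text only says that β decays over steps.

```python
from dataclasses import dataclass

@dataclass
class RLHFConfig:
    # Illustrative container for the hyper-parameters reported in the text.
    critic_warmup_steps: int = 20     # critic-only updates before joint PPO training
    grad_clip: float = 0.5            # gradient clipping
    learning_rate: float = 5e-6       # constant learning rate
    ppo_clip_eps: float = 0.1         # PPO clip threshold epsilon
    kl_coef_init: float = 0.2         # initial KL penalty coefficient beta
    kl_coef_final: float = 0.005      # value beta decays towards
    total_iterations: int = 350       # PPO iterations for the chat models

def kl_coef_at(step: int, cfg: RLHFConfig) -> float:
    """KL coefficient at a given step. The text only states that beta decays
    'over steps'; the linear schedule here is an assumption for illustration."""
    frac = min(step / max(cfg.total_iterations, 1), 1.0)
    return cfg.kl_coef_init + frac * (cfg.kl_coef_final - cfg.kl_coef_init)

if __name__ == "__main__":
    cfg = RLHFConfig()
    print([round(kl_coef_at(s, cfg), 4) for s in (0, 175, 350)])  # [0.2, 0.1025, 0.005]
```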
# 4 Safety
We believe that model safety improvements stem not only from constraints during data cleansing or alignment stages but also from harnessing positive knowledge and identifying negative knowledge during all training stages. Guided by this concept, we have enhanced model safety throughout the Baichuan 2 training process.
# 4.1 Pre-training Stage
In the pre-training stage, we pay close attention to data safety. The entire pre-training dataset underwent a rigorous data filtering process aimed at enhancing safety. We devised a system of rules and models to eliminate harmful content such as violence, pornography, racial discrimination, hate speech, and more. | 2309.10305#28 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 28 |
favourably to reports from Cormack et al. [1998], who labelled TREC ad-hoc documents a second time, using a second group of assessors. From their data we can compute Cohen's κ = 0.52 between two groups of trained human assessors. On other data sets, Castillo et al. [2006] report κ = 0.56 labelling web pages for spam; Hersh et al. [1994] report κ = 0.41 on relevance in the OHSUMED collection; Agarwal et al. [2019] saw κ = 0.44 for news sentiment; and Scholer et al. [2013] reported that assessors seeing a document for a second time only agreed with their first label 52% of the time. Faggioli et al. [2023] reported κ from 0.26 to 0.40 on binarised labels from TREC-8 and TREC Deep Learning. Faggioli et al. used another LLM but with a relatively simple prompt, reinforcing LLMs' sensitivity to their prompt.
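For reference, the κ statistic quoted above can be computed from two annotators' labels over the same items as follows; this is a standard implementation of Cohen's κ, not code from any of the cited studies.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b) and len(labels_a) > 0
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Toy example with binarised relevance labels (1 = relevant, 0 = non-relevant).
print(round(cohens_kappa([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]), 3))  # -> 0.333
```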
On this metric, at least, we can conclude that with minimal iterations LLMs are already at human quality for this collection and for some prompts. In Section 5 we will see that, in a common setting, LLMs can perform substantially better than third-party judges. | 2309.10621#28 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 28 | RLHF 0.2 4.1 11.4 12.5 15.5 12.5 15.2 13.3 14.5 +3.2 11.9 +1.7 0.63 0.47 70B Base RLHF 1.9 4.3 19.4 14.3 24.6 15.7 26.4 16.6 26.4 +5.6 17.9 +3.0 0.73 0.73 Lemur-v1 Vicuna-v1.5 Base SIFT SIFTâ 7B 13B SIFTâ 70B 1.0 3.8 0.0 0.0 17.9 27.0 6.7 2.2 23.6 35.7 12.3 4.4 25.3 37.5 15.4 6.7 26.3 +5.8 37.0 +7.7 12.6 +3.4 8.4 +2.1 0.77 0.73 0.77 1.00 Closed-source LLM chat-bison-001 - claude-2 - claude-instant-1 - gpt-3.5-turbo-0613 - gpt-4-0613 - -â - - - - 0.3 26.4 12.1 2.7 - 15.9 35.5 32.2 16.9 - 14.2 | 2309.10691#28 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 28 | SwiGLU(x, W, V, b, c, β) = Swishβ(xW + b) ⊗ (xV + c) (1)
where x is a vector of the hidden representation at a particular position in the sequence. W, V, b, c are the matrices and bias vectors, respectively.
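The activation in Eq. (1) can be written out directly. Below is a minimal NumPy sketch with toy dimensions; β defaults to 1 (so Swish reduces to SiLU), which is a common choice assumed here since the text does not specify it.

```python
import numpy as np

def swish(x, beta=1.0):
    # Swish_beta(x) = x * sigmoid(beta * x); beta = 1 gives SiLU.
    return x / (1.0 + np.exp(-beta * x))

def swiglu(x, W, V, b, c, beta=1.0):
    # SwiGLU(x, W, V, b, c, beta) = Swish_beta(xW + b) ⊗ (xV + c),
    # where ⊗ denotes element-wise multiplication.
    return swish(x @ W + b, beta) * (x @ V + c)

# Tiny illustrative shapes; the actual models use a hidden size of 2,048
# and a filter (intermediate) size of 5,461.
rng = np.random.default_rng(0)
d_model, d_ff = 8, 22
x = rng.standard_normal((4, d_model))        # 4 token positions
W = rng.standard_normal((d_model, d_ff))
V = rng.standard_normal((d_model, d_ff))
b = np.zeros(d_ff)
c = np.zeros(d_ff)
print(swiglu(x, W, V, b, c).shape)           # -> (4, 22)
```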
Model: GPT-3 XL, Our DC, GPT-3, LLaMA, Our LBS
n params: 1.3B, 1.3B, 6.7B, 6.7B, 6.7B
n layers: 24, 24, 32, 32, 32
d model: 2,048, 2,048, 4,096, 4,096, 4,096
n heads: 24, 24, 32, 32, 32
d heads: 128, 128, 128, 128, 128
batch size: 1M, 2M, 2M, 4M, 14.3M
learning rate: 2.0×10⁻⁴, 1.2×10⁻², 1.2×10⁻⁴, 3.0×10⁻⁴, 1.8×10⁻⁴
Table 4: Detailed model sizes, architectures, and optimization hyper-parameters. Our LBS model details are presented in Sec. 6.
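As a quick sanity check on the sizes in Table 4, the standard back-of-the-envelope count (roughly 12 · n_layers · d_model² for the decoder blocks plus the token embedding) reproduces the 1.3B and 6.7B figures. The sketch below is our own approximation, not the paper's exact accounting; the vocabulary size of 50,277 is taken from the tokenizer described later in the text.

```python
def approx_decoder_params(n_layers: int, d_model: int, vocab_size: int = 50_277) -> float:
    """Rough GPT-style decoder parameter count: ~12 * n_layers * d_model^2 for the
    transformer blocks, plus the token embedding matrix."""
    return 12 * n_layers * d_model ** 2 + vocab_size * d_model

for name, (n_layers, d_model) in {"1.3B config": (24, 2048), "6.7B config": (32, 4096)}.items():
    print(name, f"~{approx_decoder_params(n_layers, d_model) / 1e9:.2f}B params")
```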
# 4.2 Training Details | 2309.10818#28 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 29 | Furthermore, we curated a Chinese-English bilingual dataset comprising several million webpages from hundreds of reputable websites that represent various positive value domains, encompassing areas such as policy, law, vulnerable groups, general values, traditional virtues, and more. We also heightened the sampling probability for this dataset.
# 4.2 Alignment Stage
We build a red-teaming procedure consisting of 6 types of attacks and 100+ granular safety value categories. An expert annotation team of 10 with traditional internet security experience initialized safe alignment prompts; the relevant snippets from the pre-training dataset were retrieved to create responses, resulting in approximately 1K annotated data for initialization.
• The expert annotation team guided a 50-person outsourced annotation team through red-blue confrontation with the initialized alignment model, resulting in the generation of 200K attack prompts.
• Using a specialized multi-value supervised sampling method, we maximized the utilization of attack data to generate responses at varying safety levels.
During the RL optimization stage, we also take safety into account first:
• At the onset of safety reinforcement, DPO (Rafailov et al., 2023) methods efficiently employed limited amounts of annotated data to enhance performance concerning specific vulnerability issues (a minimal sketch of the DPO objective follows this list).
⢠By employing a Reward Model that integrates Helpful and Harmless objectives, PPO safety reinforcement training was conducted.
# 5 Evaluations | 2309.10305#29 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10691 | 29 | - -â - - - - 0.3 26.4 12.1 2.7 - 15.9 35.5 32.2 16.9 - 14.2 36.0 39.2 24.1 - 13.0 39.8 44.4 31.7 - 14.5 +2.5 39.9 +3.1 45.9 +8.0 36.2 +8.2 69.5 - 0.40 0.81 0.84 0.96 * Evaluated LLM failed to produce parsable output as instructed in some cases. See §3.5 and Tab. A.7 for details. â We identified potential undesired artifacts in its training data, which hurt its performance. See §3.5 for details. | 2309.10691#29 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 29 |
# 4.2 Training Details
Tokenizer. We use an adapted GPT-NeoX [2] BPE-based tokenizer similar to that used in GPT-2 for all of our experiments, which has a vocabulary size of 50,277. Our entire training dataset for each configuration contains 330B tokens after tokenization, and each model takes about 2.5 days on the Cerebras 16× CS-2 cluster.
Optimizer. We employ the AdamW optimizer [26] to train our models, adopting these specific hyper-parameters: β1 = 0.9, β2 = 0.95, and eps = 1.0e-08. Our chosen learning rate follows a linear scheduler, culminating in a final learning rate that is 10% of its peak value. Additionally, we apply a weight decay of 0.1, limit the gradient using a clip value of 1.0, and implement a 150-step warmup.
Other Hyperparameters. In our model, the filter size is 5,461, the hidden size is 2,048, and the attention dropout rate is 0. SwiGLU is used as the nonlinearity and ALiBi is used for position embedding. Mixed precision and bfloat16 are employed during model training. More hyperparameters are shown in Table 4.
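A hedged PyTorch sketch of this optimizer setup (AdamW with β1 = 0.9, β2 = 0.95, eps = 1e-8, weight decay 0.1, a 150-step warmup, linear decay to 10% of the peak learning rate, and gradient clipping at 1.0). The model, batch, and total step count below are placeholders, not the actual training configuration.

```python
import torch

def lr_lambda_factory(warmup_steps: int, total_steps: int, final_frac: float = 0.1):
    """Linear warmup, then linear decay to `final_frac` of the peak learning rate."""
    def lr_lambda(step: int) -> float:
        if step < warmup_steps:
            return (step + 1) / warmup_steps
        progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
        return 1.0 - (1.0 - final_frac) * min(progress, 1.0)
    return lr_lambda

model = torch.nn.Linear(256, 256)      # stand-in for the actual 1.3B-parameter model
peak_lr = 1.2e-2                       # peak learning rate of the 1.3B DC models (Table 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                              betas=(0.9, 0.95), eps=1.0e-8, weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda_factory(warmup_steps=150, total_steps=10_000))

for step in range(300):                # illustrative loop; real training runs far longer
    loss = model(torch.randn(8, 256)).pow(2).mean()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient clip value of 1.0
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```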
# 5 Results and Analysis | 2309.10818#29 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 30 |
• By employing a Reward Model that integrates Helpful and Harmless objectives, PPO safety reinforcement training was conducted.
# 5 Evaluations
In this section, we report the zero-shot or few-shot results of the pre-trained base models on standard benchmarks. We evaluate Baichuan 2 on free-form generation tasks and multiple-choice tasks.
• Free-form generation: Models are given some sample inputs (shots) and then generate continuations to obtain results, as in question answering, translation, and other tasks.
• Multiple-choice: Models are given a question and multiple choices, and the task is to select the most appropriate candidate (a minimal likelihood-scoring sketch for this setting is given below). Given the variety of tasks and examples, we incorporated open-source evaluation frameworks like lm-evaluation-harness (Gao et al., 2021) and OpenCompass (OpenCompass, 2023) into our in-house implementations for fair benchmarking against other models.
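For the multiple-choice setting, a common scoring scheme (and the approach used by harnesses such as lm-evaluation-harness) is to pick the candidate whose continuation has the highest log-likelihood under the model. The sketch below assumes a generic `loglikelihood(context, continuation)` callable rather than any specific framework API.

```python
def pick_choice(loglikelihood, question, choices):
    """Return the index of the candidate whose continuation scores highest.

    `loglikelihood(context, continuation)` is assumed to return the summed
    log-probability of `continuation` given `context` under the evaluated model.
    """
    scores = [loglikelihood(question, " " + choice) for choice in choices]
    return max(range(len(choices)), key=lambda i: scores[i])

# Toy stand-in for a real model: here, shorter continuations simply score higher.
fake_loglikelihood = lambda ctx, cont: -len(cont)
print(pick_choice(fake_loglikelihood, "Q: 2 + 2 = ?", ["4", "22", "five"]))  # -> 0
```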
The models we choose to compare have similar sizes to Baichuan 2 and are open-sourced so that the results can be reproduced:
• LLaMA (Touvron et al., 2023b): The language models trained by Meta on 1 trillion tokens. The context length is 2,048 and we evaluate both LLaMA 7B and LLaMA 13B.
⢠LLaMA 2 (Touvron et al., 2023c): A successor model to LLaMA 1 trained on 2 trillion tokens and better data mixture. | 2309.10305#30 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 30 | scores MAE Prompt features â â â â â 0.34± 0.01 0.38± 0.02 R â â â â 0.38± 0.02 â D â â â 0.36± 0.02 â â N â â 0.35± 0.02 â â â A â 0.19± 0.02 â â â â M 0.46± 0.02 0.32± 0.02 0.35± 0.03 0.37± 0.03 0.60± 0.03 0.22± 0.02 0.71± 0.01 0.72± 0.01 0.73± 0.01 0.82± 0.02 0.65± 0.01 R D â â â 0.40± 0.02 R â N â â 0.38± 0.02 R â â A â 0.21± 0.02 R â â â M 0.49± 0.02 â D N â â 0.35± 0.02 â D â A â 0.19± 0.01 â D â â M 0.45± 0.01 â â N A â 0.18± 0.01 â â N â M 0.41± 0.02 â â â A M 0.31± 0.02 0.30± 0.03 0.33± 0.02 0.56± 0.03 | 2309.10621#30 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 30 | to 3, eventually achieving a higher SR5 (45.9% vs. 39.9%), suggesting claude-instant-1's superior ability to improve with multi-turn interaction.
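For readers reconstructing these numbers, SR_k can be read as the success rate achievable within a budget of k interaction turns. The helper below encodes that reading; it is our interpretation of the metric for illustration, not the benchmark's reference implementation.

```python
def success_rate_within_k(turns_to_success, k):
    """SR_k under this reading: the fraction of tasks solved using at most k
    interaction turns; None marks tasks never solved within the budget."""
    solved = sum(1 for t in turns_to_success if t is not None and t <= k)
    return solved / len(turns_to_success)

# Toy outcomes for 8 tasks (turn index at which each task was first solved).
outcomes = [1, 2, None, 3, 5, None, 2, 4]
print([round(success_rate_within_k(outcomes, k), 2) for k in (1, 2, 3, 4, 5)])
```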
Absolute performance and improvement-per-turn scale with model size. For open-source CodeLLaMA and LLaMA-2, we observe a trend on all variants (Base, SIFT, and RLHF) that Δtools and SR5 increase when scaling up LLMs. As we discuss in §3.5, Vicuna-v1.5 models are an exception, potentially due to their training artifacts that hurt task performance. | 2309.10691#30 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 30 | # 5 Results and Analysis
This section presents the analytical experiments and results on different combinations of SlimPajama. We first discuss the results following Huggingface Leaderboard Evaluation. Then, we demonstrate the importance of global deduplication and a diverse range of data sources in enhancing LLM's performance by conducting additional comprehensive evaluations across various topics. Finally, we visualize the training loss curves of different data domain combinations and provide insights on how they connect to the models' performance.
# 5.1 Huggingface Leaderboard Evaluation with Harness | 2309.10818#30 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 31 |
• LLaMA 2 (Touvron et al., 2023c): A successor model to LLaMA 1, trained on 2 trillion tokens with a better data mixture.
• Baichuan 1 (Baichuan, 2023b): Baichuan 7B is trained on 1.2 trillion tokens and Baichuan 13B is trained on 1.4 trillion tokens. Both of them focus on English and Chinese.
• ChatGLM 2-6B (Zeng et al., 2022): A chat language model that has strong performance on several benchmarks⁵.
• MPT-7B (MosaicML, 2023): An open-source LLM trained on 1 trillion tokens of English text and code.
• Falcon-7B (Penedo et al., 2023): A series of LLMs trained on 1 trillion tokens enhanced with curated corpora. It is made available under the Apache 2.0 license.
• Vicuna-13B (Chiang et al., 2023): A language model trained by fine-tuning LLaMA-13B on the conversational dataset generated by ChatGPT.
• Chinese-Alpaca-Plus-13B (Cui et al., 2023): A language model trained by fine-tuning LLaMA-13B on the conversational dataset generated by ChatGPT.
⁵ They do not release their base models, so we adopt the results they report on their website. | 2309.10305#31 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 31 | 0.41± 0.02 â â â A M 0.31± 0.02 0.30± 0.03 0.33± 0.02 0.56± 0.03 0.20± 0.02 0.37± 0.02 0.59± 0.03 0.24± 0.02 0.62± 0.02 0.29± 0.02 0.42± 0.04 0.69± 0.01 0.71± 0.01 0.81± 0.02 0.64± 0.01 0.74± 0.01 0.83± 0.01 0.66± 0.01 0.84± 0.01 0.69± 0.01 0.80± 0.02 R D N â â 0.37± 0.02 R D â A â 0.22± 0.01 R D â â M 0.46± 0.02 R â N A â 0.20± 0.01 R â N â M 0.42± 0.02 R â â A M 0.38± 0.02 â D N A â 0.17± 0.01 â D N â M 0.40± 0.02 â D â A M 0.31± 0.01 â â N A M 0.27± 0.02 0.72± 0.02 0.82± 0.01 0.66± | 2309.10621#31 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 31 |
SIFT on multi-turn data could be helpful. Despite the issue above, Vicuna-v1.5 (7B, SIFT) does show stronger performance compared to LLaMA-2 (Base and RLHF, 7B) in Δtools (+3.4% vs. +2.2% / +1.5%) and SR5 (12.6% vs. 9.7% / 7.3%). Lemur-v1 (70B, SIFT) also shows stronger performance than its Base variant. However, except CodeLLaMA (7B), we do not find similar improvements on CodeLLaMA (SIFT). We hypothesize that the performance gain on Vicuna-v1.5 and Lemur-v1 could be attributed to fine-tuning on ShareGPT's multi-turn human-ChatGPT conversations.
[Figure: Success Rate, micro-averaged (%) vs. Max Number of Interaction Turns k; legend: LLaMA-2 (70B, Base), LLaMA-2 (70B, RLHF), claude-instant-1 (closed-source), gpt-3.5-turbo-0613 (closed-source).]
Figure 2: With an increasing interaction | 2309.10691#31 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 31 | # 5.1 Huggingface Leaderboard Evaluation with Harness
Following the Huggingface Leaderboard Evaluation [12], we also assess our models on four key benchmarks using the Eleuther AI Language Model Evaluation Harness [14]. This unified framework facilitates the evaluation of generative language models across a broad scope of tasks. Specifically, our tests comprised: 1) AI2 Reasoning Challenge (25-shot) [6]: This entails a series of grade-school level science questions. 2) HellaSwag (10-shot) [41]: This benchmark gauges commonsense inference. While straightforward for humans, with an average accuracy of 95%, it poses challenges for state-of-the-art models. 3) MMLU (5-shot) [16]: Designed to assess a text model's multitask proficiency, this test spans 57 diverse tasks, including elementary mathematics, US history, computer science, law, among others. 4) TruthfulQA (0-shot) [23]: This evaluates a model's inclination to echo inaccurate information frequently encountered online. However, it's pertinent to
note that within the Harness, TruthfulQA is essentially a 6-shot task, as it consistently commences with six examples, even when initialized with zero for the number of few-shot examples. | 2309.10818#31 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10691 | 32 | Figure 2: With an increasing interaction
RLHF could hurt LLM-tool multi-turn interaction. We find that on LLaMA-2 series, RLHF alignment generally hurts models' performance in both Δtools (-0.7% to -2.6%) and SR5 (-2.4% to -8.5%), similar to the prior observation that alignment can degrade task performance (Ouyang et al., 2022b). However, it's hard to conclude that RLHF in general hurts model performance. We leave it for future work to explore the role of RLHF in multi-turn interaction.
3.3 MEASURING LLM'S ABILITY TO LEVERAGE NATURAL LANGUAGE FEEDBACK
On top of LLM-tool interaction, we use gpt-4-0613 to simulate user feedback for evaluated LLMs (Fig. 1 with red dotted box). With a k = 5 interaction limit, we measure the LLM's ability to leverage natural language feedback using the absolute performance SRfeedback and the performance difference after feedback is given: Δfeedback = SRfeedback | 2309.10691#32 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
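The chunk above (2309.10691#32) discusses SR5, the success rate under a five-turn interaction budget. A minimal, hypothetical sketch of how such an SR_k curve can be tabulated from per-instance results; all numbers and the variable name `turns_to_solve` are invented for illustration and are not from the paper.

```python
# Hypothetical sketch: success rate as a function of the interaction-turn
# budget k, micro-averaged over task instances. None means "never solved".
turns_to_solve = [1, 2, 2, 3, 5, None, 4, None, 2, 3]

for k in range(1, 6):
    solved = sum(1 for t in turns_to_solve if t is not None and t <= k)
    print(f"SR_{k} = {solved / len(turns_to_solve):.0%}")
```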
2309.10818 | 32 | As shown in Table 5, with the exception of DC-5, our average results are all better than RedPajama-1.3B which is also trained on 330B tokens. Among our combinations, the DC-1 (which relies solely on SlimPajama Commoncrawl) achieves the highest scores for ARC and MMLU among all tested configurations. Yet, its performance on TruthfulQA ranks at the bottom. On the other hand, DC-3 obtains the top average accuracy across all SlimPajama data combinations, while DC-6 stands out with the best results on HellaSwag and superior average performance across the board. A potential strategy to harness the strengths of each configuration might involve a sequential training process on DC-1, DC-3, and DC-6.
Furthermore, SlimPajama is built using global deduplication across all sources. This suggests that merging all domains typically yields better results than selective combinations, given the absence of overlaps among different domain datasets. This also highlights the importance of global deduplication and a diverse range of data sources in enhancing LLM overall performance. | 2309.10818#32 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
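The chunk above (2309.10818#32) contrasts global deduplication across sources with local, per-source deduplication. A toy sketch of the difference using exact hashes of normalised text; SlimPajama itself uses more sophisticated near-deduplication, and the sources and documents below are made up.

```python
import hashlib

def fingerprint(doc: str) -> str:
    # Exact hash of whitespace- and case-normalised text.
    return hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()

# Made-up sources; one document is a near-duplicate across two sources.
sources = {
    "commoncrawl": ["the cat sat on the mat", "an article about llms"],
    "c4":          ["The cat sat on the  mat", "a recipe for bread"],
}

# Local deduplication: duplicates are only removed within each source.
local_kept = sum(len({fingerprint(d) for d in docs}) for docs in sources.values())

# Global deduplication: one fingerprint set shared across all sources.
seen, global_kept = set(), 0
for docs in sources.values():
    for d in docs:
        fp = fingerprint(d)
        if fp not in seen:
            seen.add(fp)
            global_kept += 1

print("kept after local dedup :", local_kept)   # 4: cross-source duplicate survives
print("kept after global dedup:", global_kept)  # 3: cross-source duplicate removed
```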
2309.10305 | 33 | # 5.1 Overall Performance
This section introduces the overall performance of Baichuan 2 base models compared with other similar-sized models. We choose 8 benchmarks for comparison. MMLU (Hendrycks et al., 2021a), the Massive Multitask Language Understanding benchmark, consists of a range of multiple-choice questions on academic subjects. C-Eval (Huang et al., 2023) is a comprehensive Chinese evaluation benchmark consisting of more than 10k multiple-choice questions. CMMLU (Li et al., 2023) is also a general evaluation benchmark specifically designed to evaluate the knowledge and reasoning abilities of LLMs within the context of the Chinese language and culture. AGIEval (Zhong et al., 2023) is a human-centric benchmark specifically designed to evaluate general abilities like human cognition and problem-solving. Gaokao (Zhang et al., 2023) is an evaluation framework that utilizes Chinese college entrance examination questions. BBH (Suzgun et al., 2022) is a suite of challenging BIG-Bench (Srivastava et al., 2022) tasks on which language model evaluations did not outperform the average human rater. GSM8K (Cobbe et al., 2021) is an evaluation benchmark focused on math. HumanEval (Chen et al., 2021) is a docstring-to-code dataset consisting of 164 coding problems that test various aspects of programming logic. | 2309.10305#33 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 33 | R D N A M 0.16 ± 0.02† 0.51 ± 0.06 0.77 ± 0.03
Table 2. Performance of the variant prompts of Figure 1, compared to human labels on a stratified sample of the TREC Robust data. R = include role, D = include description, N = include narrative, A = include aspects, M = include multiple "judges". Accuracy of document scores is measured with mean absolute error and with Cohen's κ against TREC assessors on binary labels. Accuracy of document preference is measured with AUC. Accuracy of query and system ordering is measured with RBO, normalised to the range 0–1. Uncertainty is reported as a 95% confidence interval based on 20 bootstraps. † marks the best prompt in each case (significantly better than the next-best performer, one-sided t test, p < 0.05).
−0.04 +0.01 +0.06 +0.21 −0.13 | 2309.10621#33 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
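An illustrative sketch, not the paper's code, of the agreement metrics named in the Table 2 caption above: graded scores in [0, 2] are binarised and compared with mean absolute error and Cohen's κ. All example scores below are hypothetical.

```python
def binarise(scores, threshold=1):
    # Treat "relevant" and "highly relevant" (>= 1) as relevant.
    return [1 if s >= threshold else 0 for s in scores]

def mean_absolute_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two binary label lists."""
    n = len(labels_a)
    observed = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    p_a1 = sum(labels_a) / n          # P(rater A says "relevant")
    p_b1 = sum(labels_b) / n          # P(rater B says "relevant")
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical graded scores per document: LLM vs. human assessor.
llm_scores = [2, 0, 1, 0, 2, 1, 0, 0]
human_scores = [2, 0, 2, 1, 1, 0, 0, 0]

print("MAE  :", mean_absolute_error(llm_scores, human_scores))
print("kappa:", cohens_kappa(binarise(llm_scores), binarise(human_scores)))
```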
2309.10691 | 33 | Overall Observations. We find no significant difference between open- and closed-source models in terms of Δfeedback. Open-source models obtain +1.7% to +17.2% from feedback, while closed-source models obtain +6.5% to +15.2%. However, there is still a gap between them in absolute success rate SRfeedback, as the best open-source model Lemur-v1 (70B, SIFT) still lags behind the best closed-source model claude-instant-1 by 8.7%. Surprisingly, we find that CodeLLaMA-34B-base can achieve comparable performance to GPT-4 on decision-making tasks with language feedback from it, showing its strong ability to leverage language feedback.
The effect of SIFT and RLHF. Similar to §3.2, we find that SIFT and RLHF hurt models' ability to leverage feedback. The results on CodeLLaMA (except 7B) and LLaMA-2 show that SIFT/RLHF models all have lower Δfeedback and SRfeedback than their base variants. Another two exceptions are Vicuna-v1.5 (7B) and Lemur-v1 (70B). We speculate that using multi-turn conversations (ShareGPT) for SIFT contributes to these two exceptions. | 2309.10691#33 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
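A small sketch of the Δfeedback quantity discussed in the chunk above, assuming (as the truncated definition suggests) that it is the success rate with simulated feedback minus the success rate without it. The per-task outcomes below are invented.

```python
def success_rate(outcomes):
    """outcomes: list of booleans, one per task instance (solved or not)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical per-task results for one model under a k = 5 interaction limit.
no_feedback = [True, False, False, True, False, True, False, False]
with_feedback = [True, True, False, True, False, True, True, False]

sr_no_feedback = success_rate(no_feedback)
sr_feedback = success_rate(with_feedback)
delta_feedback = sr_feedback - sr_no_feedback

print(f"SR (no feedback)  : {sr_no_feedback:.1%}")
print(f"SR (with feedback): {sr_feedback:.1%}")
print(f"Delta_feedback    : {delta_feedback:+.1%}")
```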
2309.10818 | 33 | Model                   Average  ARC   HellaSwag  MMLU  TruthfulQA
Cerebras-GPT-1.3B [11]   33.5     26.3  38.5       26.6  42.7
GPT-neo-1.3B [3]         36.0     31.2  48.5       24.8  39.6
RedPajama-1.3B [7]       38.0     37.2  55.8       24.9  34.3
DC-1-1.3B                38.5     36.3  56.0       27.0  34.8
DC-2-1.3B                38.4     33.9  55.5       25.7  38.6
DC-3-1.3B                38.6     34.7  56.0       25.6  38.0
DC-4-1.3B                38.5     35.2  54.7       25.7  38.3
DC-5-1.3B                37.6     33.4  53.3       26.0  37.6
DC-6-1.3B                41.0     35.1  64.7       26.2  37.9
Table 5: Results of six dataset combination configurations following Huggingface Leaderboard Evaluation [12] with Harness [14].
# 5.2 More Evaluations | 2309.10818#33 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
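A quick consistency check, using a few rows copied from Table 5 above, that the Average column is the mean of the four benchmark scores (rounded to one decimal place).

```python
# Values copied from the reconstructed Table 5; only a subset of rows shown.
table5 = {
    "Cerebras-GPT-1.3B": {"ARC": 26.3, "HellaSwag": 38.5, "MMLU": 26.6, "TruthfulQA": 42.7},
    "RedPajama-1.3B":    {"ARC": 37.2, "HellaSwag": 55.8, "MMLU": 24.9, "TruthfulQA": 34.3},
    "DC-3-1.3B":         {"ARC": 34.7, "HellaSwag": 56.0, "MMLU": 25.6, "TruthfulQA": 38.0},
    "DC-6-1.3B":         {"ARC": 35.1, "HellaSwag": 64.7, "MMLU": 26.2, "TruthfulQA": 37.9},
}

for model, scores in table5.items():
    avg = sum(scores.values()) / len(scores)
    print(f"{model:20s} average = {avg:.1f}")
```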
2309.10305 | 34 | For CMMLU and MMLU, we adopt the official implementations and use 5-shot evaluation. For BBH we adopt 3-shot evaluation. For C-Eval, Gaokao, and AGIEval we only select the multiple-choice questions with four candidates for better evaluation. For GSM8K, we adopt 4-shot testing derived from OpenCompass (OpenCompass, 2023). We also incorporate the results of GPT-4 (gpt-4-0613) and GPT-3.5-Turbo (gpt-3.5-turbo-0613). Unless stated otherwise, the results in this paper were obtained using our internal evaluation tools.
The overall result is shown in Table 1. Compared with other similar-sized open-sourced models, our model has a clear performance advantage. Especially in math and code problems, our model achieves significant improvement over Baichuan 1.
# 5.2 Vertical Domain Evaluations
We also evaluate Baichuan 2 in vertical domains, where we choose the law and medical fields, as they have been widely studied in recent years.
In the law field, we report scores of JEC-QA (Zhong et al., 2020), which is collected from the National Judicial Examination of China. It contains multiple-choice and multiple-answer questions. For compatibility with our evaluation suite, we only test the multiple-choice questions. | 2309.10305#34 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
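The chunk above (2309.10305#34) describes few-shot, multiple-choice evaluation with four candidates. A hedged sketch of how such a prompt could be assembled; the exact formatting used by the paper's internal tools or OpenCompass may differ, and the questions below are invented (two shots shown instead of five for brevity).

```python
def format_example(question, choices, answer=None):
    # One multiple-choice item with four labelled candidates.
    lines = [question]
    lines += [f"{label}. {text}" for label, text in zip("ABCD", choices)]
    lines.append(f"Answer: {answer}" if answer else "Answer:")
    return "\n".join(lines)

def build_few_shot_prompt(shots, test_question, test_choices):
    blocks = [format_example(q, c, a) for q, c, a in shots]
    blocks.append(format_example(test_question, test_choices))
    return "\n\n".join(blocks)

# Hypothetical miniature example.
shots = [
    ("What is 2 + 2?", ["3", "4", "5", "6"], "B"),
    ("Which planet is largest?", ["Earth", "Mars", "Jupiter", "Venus"], "C"),
]
print(build_few_shot_prompt(shots,
                            "What gas do plants absorb?",
                            ["Oxygen", "Nitrogen", "CO2", "Helium"]))
```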
2309.10621 | 34 | R −0.04, D +0.01, N +0.06, A +0.21, M −0.13
Table 3. Performance impact of the optional prompt features in Figure 1, measured using κ against TREC assessors. All changes are statistically significant and effects are ±0.005 at a 95% CI.
# 4.2 Effect of prompt features
Table 2 gives results for 32 prompt templates, made from turning five features on or off. To try to summarise the effect of each feature individually, Table 3 reports the effect of each feature on κ: that is, the effect of including a prompt feature independent of any other features being on or off.
Contrary to our expectations, there is a statistically significant negative effect due to role (R) and multiple "judges" (M): κ decreases by an average 0.04 and 0.13 respectively. Adding description (D) gives an insubstantial boost (only 0.01 points of κ
). Adding a narrative (N) leads to a boost of 0.04; this is modest, but perhaps the background knowledge of LLMs (especially on well-used public data like this) is enough that the narrative adds little information beyond the | 2309.10621#34 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
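A sketch of the marginal-effect summary behind Table 3 above: for each prompt feature, compare the mean κ of the 16 prompt variants with the feature on against the 16 with it off. The per-prompt κ values here are faked with a toy additive model purely to make the computation runnable; they are not the paper's measurements.

```python
from itertools import product

FEATURES = ["R", "D", "N", "A", "M"]

def fake_kappa(flags):
    # Toy additive model with assumed per-feature effects, for illustration only.
    base = 0.30
    effect = {"R": -0.04, "D": 0.01, "N": 0.06, "A": 0.21, "M": -0.13}
    return base + sum(effect[f] for f, on in zip(FEATURES, flags) if on)

# One kappa per on/off combination of the five features (32 prompt variants).
kappas = {flags: fake_kappa(flags) for flags in product([0, 1], repeat=5)}

for i, feature in enumerate(FEATURES):
    on = [k for flags, k in kappas.items() if flags[i] == 1]
    off = [k for flags, k in kappas.items() if flags[i] == 0]
    delta = sum(on) / len(on) - sum(off) / len(off)
    print(f"effect of {feature}: {delta:+.2f}")
```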
2309.10691 | 34 | # 3.4 MEASURING THE EFFICACY OF DIFFERENT LLM'S ABILITY TO PROVIDE FEEDBACK
Fixing the evaluated model to be gpt-3.5-turbo-0613, we assess seven LLMs' feedback-providing capability through Δfeedback (Tab. 4). Our main finding is that task-solving ability could be orthogonal to feedback-providing ability: an LLM's higher task-solving performance does not guarantee better feedback-providing capability and vice versa. For example, although GPT-3.5 (16k) performs well in task-solving (SR5 ranked 3rd in Tab. 4), it leads to a performance degradation of −10.4% in GPT-3.5; similarly, GPT-4 with self-feedback in Tab. 3 also experiences degraded performance. On the other hand, despite performing the worst in solving tasks in Tab. 4, CodeLLaMA-34B-Instruct can provide feedback that improves the stronger GPT-3.5.
3.5 MINT CAN HELP DETECT FAILURE PATTERNS OF EVALUATED LLMS | 2309.10691#34 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 34 | Table 5: Results of six dataset combination configurations following Huggingface Leaderboard Evaluation [12] with Harness [14].
# 5.2 More Evaluations
As shown in Table 6, we present additional evaluations across various domains to investigate the fine-grained capabilities offered by different data combinations. Except for DC-6 (the model trained on RefinedWeb data), incorporating more sources, such as DC-3, typically leads to improved average performance. Upon analysis, we find that specific mixtures excel in particular evaluation benchmarks. For example, DC-1 obtains the highest accuracy in the arc challenge and race. Meanwhile, DC-3 outperforms others in the wsc273, swag, and pawsx, and DC-5 emerges as the top performer in the xstory cloze evaluation. Moreover, all of our configurations are superior in average performance to GPT-neo-1.3B [3] and RedPajama-1.3B [7].
11 | 2309.10818#34 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 35 | In the medical field, we report scores from two medical benchmarks, MedQA (Jin et al., 2021) and MedMCQA (Pal et al., 2022), as well as average scores from medical-related disciplines in C-Eval (val), MMLU, and CMMLU (abbreviated as CMC). Specifically, MedQA is collected from the professional medical board exams in the USA and China, including three subsets, i.e., USMLE, MCMLE and TWMLE, and we report the results of USMLE and MCMLE with five candidates; MedMCQA is collected from Indian medical entrance exams, and we evaluate multiple-choice questions and report the scores in the dev set. The medical-related disciplines in CMC include (1) clinical medicine, basic medicine of C-Eval (val), (2) clinical knowledge, anatomy, college medicine, college biology, nutrition, virology, medical genetics, professional medicine of MMLU, (3) anatomy, clinical knowledge, college medicine, genetics, nutrition, traditional Chinese medicine, virology of CMMLU. Moreover, all these datasets are evaluated in 5-shot. | 2309.10305#35 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 35 | Aspects (A) give a substantial improvement in κ against TREC assessors, +0.21. Topicality and trustworthiness are the two aspects we used here, but of course they are not the only aspects that might matter, and we do not claim they are the best selection; in Bing we use several aspects, and measure the LLM's performance on all of these with good results. In this case it seems likely, in fact, that it is the step-by-step nature of labelling with aspects that gives rise to these improvements rather than the particulars of the aspects themselves.
Note that this presents features in isolation, when in fact any prompt could have zero, one, two, three, four, or all five of these features at once and the effects are not necessarily additive. The best-performing prompt in Table 2 is, however, of the form "-DNA-", which is expected from this analysis.
# 4.3 Effect of prompt length | 2309.10621#35 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
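A sketch of the "aspects" idea in the chunk above: ask for per-aspect ratings (here topicality and trustworthiness) before the overall label, so the model labels step by step. The wording below is illustrative only, not the prompt used in the paper or at Bing.

```python
def build_prompt(query, document, aspects=("topicality", "trustworthiness")):
    # Assemble a relevance-labelling prompt with optional per-aspect questions.
    parts = [
        f"Query: {query}",
        f"Document: {document}",
        "",
    ]
    for aspect in aspects:
        parts.append(f"First, rate the {aspect} of the document for this query (0-2).")
    parts.append("Finally, give an overall relevance label from 0 (not relevant) to 2 (highly relevant).")
    return "\n".join(parts)

print(build_prompt("metric conversion", "A table of miles-to-kilometres conversions."))
```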
2309.10691 | 35 | 3.5 MINT CAN HELP DETECT FAILURE PATTERNS OF EVALUATED LLMS
Surprisingly, beyond evaluating LLMsâ multi-turn interaction ability, we find that complex multi- turn tasks (e.g., Fig. 1) in MINT can also act as a âtest suiteâ to test an LLM for unexpected behavior. We find two main categories of anomalies: (1) inability to follow formatting instructions and (2) producing unexpected outputs likely due to artifacts.
Inability to Follow Formatting Instructions. We find that some models (e.g., smaller CodeLLaMA and LLaMA, chat-bison-001) have trouble producing a parsable format as in- structed, hindering task-solving (statistics can be found in Tab. A.7). | 2309.10691#35 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
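A sketch of the formatting check implied by the chunk above: whether a response wraps its code in the instructed <execute> tags, and extraction of the code when it does. This is illustrative and not the benchmark's actual parser; the example responses are invented.

```python
import re

EXECUTE_BLOCK = re.compile(r"<execute>(.*?)</execute>", re.DOTALL)

def extract_code(response: str):
    # Return the code inside the first <execute> block, or None if absent.
    match = EXECUTE_BLOCK.search(response)
    return match.group(1).strip() if match else None

responses = [
    "Let me try.\n<execute>\nprint(1 + 1)\n</execute>",   # follows the instruction
    "[PYTHON]\nprint(1 + 1)\n[/PYTHON]",                   # ignores the tag instruction
]

for r in responses:
    code = extract_code(r)
    print("parsed" if code is not None else "format error", "->", repr(code))
```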
2309.10818 | 35 | 11
Neo [3] RedPaj. [7] DC-1 DC-2 DC-3 DC-4 DC-5 DC-6 LBS 7B 9.5 35.0 74.7 44.3 66.9 77.4 38.2 64.4 39.8 86.0 85.0 73.8 54.7 55.3 61.2
Table 6: Results of six dataset combination configurations of 1.3B models and our LBS-7B model details are presented in Sec. 6. Bigbench is evaluated under 3-shot using the average of multiple choice grade. Arc easy and arc challenge are evaluated using 5-shot, 25-shot, and 25-shot, respectively. All other eval- uation benchmarks are tested on 0-shot. * represents the results are averaged across multiple sub-items inside each benchmark dataset. | 2309.10818#35 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 36 | As shown in Table 5, Baichuan 2-7B-Base surpasses models such as GPT-3.5 Turbo, ChatGLM 2-6B, and LLaMA 2-7B in the field of Chinese law, second only to GPT-4. Compared to Baichuan 1-7B, Baichuan 2-7B-Base shows an improvement of nearly 10 points. In the medical field, Baichuan 2-7B-Base outperforms models like ChatGLM 2-6B and LLaMA 2-7B, showing significant improvement over Baichuan 1-7B as well.
Similarly, Baichuan 2-13B-Base surpasses models other than GPT-4 in the field of Chinese law. In the medical domain, Baichuan 2-13B-Base outperforms models such as XVERSE-13B and LLaMA 2-13B. Compared to Baichuan 1-13B-Base, Baichuan 2-13B-Base also exhibits remarkable improvement.
# 5.3 Math and Code
This section introduces the performance in mathematics and coding. | 2309.10305#36 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 36 | # 4.3 Effect of prompt length
Using an LLM to compare texts, Wang et al. [2023] saw an effect of prompt length: the longer the text, the more positive the LLM's assessment. We checked for similar effects in our data by modelling the LLM's signed error as a response to prompt length. This controls for any effect of length on true relevance; if longer documents are just more (or less) likely to be relevant, then the LLM should not be penalised for reflecting this. Replicating Wang et al.'s effect would require a positive effect: that is, errors should get more positive (the LLM should overestimate more, or be more optimistic) as prompts got longer.
Controlling for prompt features, we saw no substantial correlation between prompt length and signed error. Effects varied according to prompt features, with modelled score shifting between −9 × 10⁻⁶ and 1 × 10⁻⁵ per character of prompt. This corresponds to only a shift in score of −0.05 to 0.06 at the median prompt length, which (although statistically significant) is of no practical significance given the MAEs of Table 2.
# 4.4 Effect of paraphrasing prompts | 2309.10621#36 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
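A minimal sketch (made-up data) of the length check in the chunk above: regress the LLM's signed error on prompt length and inspect the slope, which the paper reports as roughly −9 × 10⁻⁶ to 1 × 10⁻⁵ per character depending on prompt features.

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    return cov / var

# Hypothetical data: prompt lengths in characters, and LLM score minus human score.
prompt_lengths = [1200, 2500, 4100, 5300, 8000, 9600]
signed_errors = [0.1, -0.2, 0.0, 0.1, -0.1, 0.0]

slope = ols_slope(prompt_lengths, signed_errors)
print(f"estimated shift in score per character: {slope:+.2e}")
```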
2309.10691 | 36 | Unexpected Output Likely Due to Data Artifact. We find that Vicuna models (SIFT on ShareGPT data) generate escaped underscores ("\_") instead of underscores ("_") across all tasks, causing syntax errors when executing code and reducing performance. We examine ShareGPT data (2023) and find at least one escaped underscore ("\_") artifact on 15% of examples, suggesting artifacts in training data could cause this issue. We observe a similar issue with CodeLLaMA-Instruct: we find that CodeLLaMA-Instruct (34B) always ignores the user-given instruction on code generation tasks "wrap your code with <execute> tag" and uses [PYTHON] to wrap the code (this happens on 100% of code generation tasks, 0% on other tasks). Touvron et al. (2023) use [PYTHON] as the tag to generate self-instruct data on code problems for SIFT. We suspect CodeLLaMA-Instruct models are trained and overfitted to the [PYTHON] token, causing them to produce [PYTHON] regardless of user instruction. We refer to §E.1 and §E.2 for examples and quantitative results.
3.6 CAN GPT-4 GENERATE HUMAN-LEVEL NATURAL LANGUAGE FEEDBACK? | 2309.10691#36 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
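A sketch of scanning instruction-tuning data for the escaped-underscore artifact ("\_") described in the chunk above. The example records are invented; only the substring test reflects the reported issue.

```python
def has_escaped_underscore(text: str) -> bool:
    # True if the text contains a literal backslash followed by an underscore.
    return "\\_" in text

# Hypothetical conversation snippets standing in for dataset examples.
examples = [
    "Use the variable my\\_value in your script.",
    "Use the variable my_value in your script.",
    "No underscores here at all.",
]

hits = sum(has_escaped_underscore(e) for e in examples)
print(f"{hits}/{len(examples)} examples contain an escaped underscore")
```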
2309.10818 | 36 | Risk of random guessing score on 1.3B models. It is widely recognized that small models, such as the 1.3B variant, may struggle to achieve satisfactory predictions on specific benchmarks like MMLU. Their results could resemble random choices, not truly capturing the model's actual capabilities. To more accurately showcase a model's true potential and reflect the ability of different data combinations, we introduce a novel metric RRGS (risk of random guessing score) to evaluate the degree of random guessing. Since 25% in MMLU represents the baseline score for a guess, this metric evaluates the variance using the average ℓ1 distance around this base value across all sub-items. A larger variance would suggest a reduced likelihood of predictions resulting from mere chance. Given an MMLU score vector X of length N with sub-item scores s1, s2, . . . , sN, RRGS can be formulated as:
$\mathrm{RRGS} = 1 - \frac{1}{N}\sum_{i=1}^{N}\left|s_i - 0.25\right| \quad (2)$ | 2309.10818#36 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 37 | # 5.3 Math and Code
This section introduces the performance in mathematics and coding.
We use GSM8K (Cobbe et al., 2021) (4-shot) and MATH (Hendrycks et al., 2021b) (4-shot) to evaluate the mathematical ability. MATH contains 12,500 mathematical questions that are harder to be solved. To evaluate the model's code ability, we report the scores in HumanEval (Chen et al., 2021) (0-shot) and MBPP (Austin et al., 2021) (3-shot). • HumanEval is a series of programming tasks including model language comprehension, reasoning, algorithms, and simple mathematics to evaluate the correctness of the model and measure the model's problem-solving ability. • MBPP. It consists of a dataset of 974 Python short functions and program textual descriptions, along with test cases used to verify the correctness of their functionality. We use OpenCompass to evaluate the ability of models in math and code. As shown in Table 6, in the field of mathematics, Baichuan 2-7B-Base surpasses models like LLaMA 2-7B. In the code domain, it outperforms models of the same size such as ChatGLM 2-6B. Baichuan 2-7B-Base exhibits significant improvement compared to the Baichuan 1-7B model. | 2309.10305#37 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 37 | # 4.4 Effect of paraphrasing prompts
We have seen that LLM performance varies considerably as the prompt is varied, even when the task and the input data are fixed. This raises a question: how sensitive is the LLM not just to coarse prompt features, such as asking for aspects, but to quirks of phrasing? In other words, if we rephrased “assume that you are writing a report” to “pretend you are collecting information for a report”, or to “you are collecting reading material before writing a report”, would the labels change? If so, then our LLM is highly sensitive to such apparently trivial considerations. That would mean that, first, the results above are only representative of a wide range of possible performance; and second, any serious attempt to use LLMs at scale needs to explore a large and unstructured prompt space.
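As a concrete illustration (not the actual prompt used in this work), paraphrase variants of this kind can be built by holding the task and inputs fixed and swapping only the instruction wording; the template text, variant list, and function name below are hypothetical.

```python
# Hypothetical paraphrases of one instruction; only the wording changes, the task stays fixed.
INSTRUCTION_VARIANTS = [
    "Assume that you are writing a report on the subject of the query.",
    "Pretend you are collecting information for a report on the subject of the query.",
    "You are collecting reading material before writing a report on the subject of the query.",
]

# Hypothetical labelling template; the query and web page are the fixed inputs.
PROMPT_TEMPLATE = (
    "{instruction}\n"
    "Query: {query}\n"
    "Web page: {page}\n"
    "Score the page's relevance to the query from 0 to 2."
)

def build_prompts(query: str, page: str) -> list[str]:
    """One prompt per paraphrase, identical apart from the instruction wording."""
    return [PROMPT_TEMPLATE.format(instruction=i, query=query, page=page)
            for i in INSTRUCTION_VARIANTS]
```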
To test this, we took the “-DNA-” prompt, the best above, and generated 42 paraphrases by rewriting the text “Given a query and a web page . . . Otherwise, mark it 0” and by rewriting the text “Split this problem into steps: . . . Produce a JSON array of scores without providing any reasoning”. Figure 3 gives some examples. | 2309.10621#37 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 37 | 3.6 CAN GPT-4 GENERATE HUMAN-LEVEL NATURAL LANGUAGE FEEDBACK?
We perform a human evaluation quantitatively comparing the feedback generated by GPT-4 and written by humans. Details can be found in Appendix §B. In Tab. 5, human annotators consider 91.2% of GPT-4 generated language feedback to be as helpful as, if not better than, human written
Table 3: LLM's ability to leverage natural language feedback, measured by Δfeedback between models' performance with and without feedback produced by gpt-4-0613. All models are evaluated with an interaction turn limit of k = 5. For both open- and closed-source LLMs, the best performance is bolded, and the second-best performance is underlined. | 2309.10691#37 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 37 | $\mathrm{RRGS} = 1 - \frac{1}{N}\sum_{i=1}^{N}\left|s_i - 0.25\right| \quad (2)$
where i is the index of sub-item in MMLU and N is the number of items of MMLU. This metric utilizes the probabilities of variance to baseline 25%, aiming to assess the extent to which a model's prediction resembles random guessing on the MMLU benchmark. The metric has three variations: (1) Consider only items with scores exceeding 25%, i.e., i ∈ {positive item set}. (2) Focus solely on items with scores less than 25%, i.e., i ∈ {negative item set}. (3) Include all items and sum them up. The results are shown in Table 7. Generally, a model with a higher MMLU average score will have a low risk of random
guessing probability.
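A minimal sketch of this metric, assuming per-sub-item MMLU accuracies are given as fractions in [0, 1]; the function name, the choice to average over the selected subset in the positive/negative variants, and the example scores are illustrative rather than taken from the paper.

```python
def rrgs(scores, variant="all", baseline=0.25):
    """Risk of random guessing score: 1 minus the mean |s_i - baseline| over selected sub-items."""
    if variant == "pos":      # variation (1): only sub-items scoring above the 25% guessing baseline
        selected = [s for s in scores if s > baseline]
    elif variant == "neg":    # variation (2): only sub-items scoring below the baseline
        selected = [s for s in scores if s < baseline]
    else:                     # variation (3): all sub-items
        selected = list(scores)
    if not selected:
        return 1.0            # no deviation from the guessing baseline observed at all
    return 1.0 - sum(abs(s - baseline) for s in selected) / len(selected)

# Illustrative per-sub-item accuracies, not real MMLU results.
scores = [0.26, 0.31, 0.22, 0.28, 0.24]
print(rrgs(scores), rrgs(scores, "pos"), rrgs(scores, "neg"))
```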
It is also crucial to employ a broader and more diverse set of benchmarks, such as in Table 6. Additionally, for a detailed understanding, we have cataloged the complete MMLU results for every sub-item in Table 12. This offers a lens into the knowledge assimilated by the pretrained models within each sub-domain on this comprehensive benchmark. | 2309.10818#37 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 38 | In mathematics, Baichuan 2-13B-Base surpasses all models of the same size, approaching the level of GPT-3.5 Turbo. In the code domain, Baichuan 2-13B-Base outperforms models like LLaMA 2-13B and XVERSE-13B. Baichuan 2-13B-Base demonstrates significant improvement compared to Baichuan 1-13B-Base.
# 5.4 Multilingual
We use Flores-101 (NLLB Team, 2022; Goyal et al., 2021; Guzmán et al., 2019) to evaluate multilingual ability. Flores-101 covers 101 languages from around the world. Its data is sourced from various domains such as news, travel guides, and books. We selected the official languages of the United Nations (Arabic (ar), Chinese (zh), English (en), French (fr), Russian (ru), and Spanish (es)), as well as German (de) and Japanese (ja), as the test languages. We conducted 8-shot tests on seven subtasks in Flores-101.
| 2309.10305#38 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 38 | Figure 4 shows the resulting spread of label quality, measured again as Cohen's κ against the labels from TREC assessors and across our stratified sample of 3000 documents. Each paraphrase is represented by one dark line, showing the mean κ and a 95% confidence interval derived from 20 bootstraps over documents. There is a large range, from mean κ = 0.50 (moderate agreement) to mean κ = 0.72 (substantial agreement, and better than the reference values cited above [Agarwal et al. 2019; Castillo et al. 2006; Cormack et al. 1998; Faggioli et al. 2023; Hersh et al. 1994]). The empirical 95% confidence interval, over all bootstraps and all paraphrases, is 0.50–0.71 (plotted at the left-hand edge of Figure 4).
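A minimal sketch of the kind of computation behind these numbers: Cohen's κ between model labels and assessor labels, with a bootstrap over documents for an empirical interval. The labels here are synthetic, and the use of scikit-learn's cohen_kappa_score is an assumption about tooling, not a statement about the authors' pipeline.

```python
import random
from sklearn.metrics import cohen_kappa_score  # assumed implementation of Cohen's kappa

def bootstrap_kappa(llm_labels, gold_labels, n_boot=20, seed=0):
    """Mean Cohen's kappa and an empirical 95% interval from bootstraps over documents."""
    rng = random.Random(seed)
    n = len(gold_labels)
    kappas = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample documents with replacement
        kappas.append(cohen_kappa_score([llm_labels[i] for i in idx],
                                        [gold_labels[i] for i in idx]))
    kappas.sort()
    return sum(kappas) / n_boot, (kappas[int(0.025 * n_boot)], kappas[int(0.975 * n_boot)])

# Synthetic binarised labels for 3000 documents; the agreement rate is made up.
gold = [random.randint(0, 1) for _ in range(3000)]
llm = [g if random.random() < 0.85 else 1 - g for g in gold]
print(bootstrap_kappa(llm, gold))
```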
This is a wide range from a single prompt design, and from Figure 3 it is not at all apparent which versions would score higher or why. The outsized effect of simple paraphrases has been observed in other domains as well [Zhang et al. 2022; Zhou et al. 2022]. | 2309.10621#38 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 38 | Open-source LLM 7B Base no feedback w/ GPT-4 feedback âfeedback, gpt-4 â0.0 4.8 +4.8 18.7 59.7 +41.0 â0.0 0.0 +0.0 4.3 16.2 +11.9 SIFT no feedback w/ GPT-4 feedback âfeedback, gpt-4 7.9 17.1 +9.2 17.2 62.7 +45.5 2.2 10.3 +8.1 8.7 25.9 +17.2 CodeLLaMA 13B Base SIFT no feedback w/ GPT-4 feedback âfeedback, gpt-4 no feedback w/ GPT-4 feedback âfeedback, gpt-4 8.5 15.8 +7.3 4.8 10.1 +5.4 4.4 27.9 +17.9 +23.5 â 2.2 50.0 59.0 14.7 +9.0 +12.5 56.0 73.9 18.4 31.9 +13.5 14.5 22.4 +7.8 34B Base SIFT no feedback w/ GPT-4 feedback âfeedback, gpt-4 no feedback w/ GPT-4 | 2309.10691#38 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 38 | Table 7: Evaluation of random guessing probability on sub-items of MMLU.
           DC-1   DC-2   DC-3   DC-4   DC-5   DC-6
MMLU       0.257  0.256  0.257  0.260  0.262  0.27
RRGS_pos   0.964  0.968  0.965  0.970  0.964  0.963
RRGS_neg   0.973  0.975  0.974  0.969  0.973  0.974
RRGS_all   0.968  0.971  0.969  0.970  0.968  0.967
# 5.3 Training Loss
Figure 3: Illustration of training loss curves. DC-2's curve closely resembles those of DC-3 and 5, so it has been excluded from the figure for clarity. | 2309.10818#38 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 39 | Figure 6: Helpfulness and harmlessness before and after safety alignment of Baichuan 2. The x-axis shows the metric before safety alignment and the y-axis shows the result after. We see that helpfulness remains largely unchanged after this procedure, while harmlessness improved substantially (more mass in upper triangle) with safety efforts.
101, including zh-en, zh-fr, zh-es, zh-ar, zh-ru, zh-ja and zh-de. The evaluation is conducted with OpenCompass.
In the multilingual domain, as shown in Table 7, Baichuan 2-7B-Base surpasses all models of the same size in all seven tasks and shows significant improvement compared to Baichuan 1-7B.
Baichuan 2-13B-Base outperforms models of the same size in four out of the seven tasks. In the zh-en and zh-ja tasks, it surpasses GPT-3.5 Turbo and reaches the level of GPT-4. Compared to Baichuan 1-13B-Base, Baichuan 2-13B-Base exhibits significant improvement in the zh-ar, zh-ru, and zh-ja tasks. | 2309.10305#39 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 39 | This leads to two observations. First, the measured performance of any prompt (including those in Table 2) should be taken as a single sample from a wider range of potential performance. Small tweaks to the wording could result in noticeably different performance, even without any changes to the prompts' overall design. Second, it is prudent to fix an overall design, and then explore rephrasing and other options. Because it is not clear what leads to better or worse performance, taking paraphrases is a reasonable approach, but we note work by Pryzant et al. [2023], Yang et al. [2023], Zhou et al. [2022], and others that suggests alternatives for fine-tuning prompts.
# 4.5 Effect of document selection | 2309.10621#39 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 39 | +7.8 34B Base SIFT no feedback w/ GPT-4 feedback âfeedback, gpt-4 no feedback w/ GPT-4 feedback âfeedback, gpt-4 17.4 30.4 +13.0 14.9 20.2 +5.4 18.4 30.1 +20.9 +11.8 ââ 2.2 3.7 +1.5 63.4 84.3 37.3 67.9 +30.6 28.2 42.7 +14.5 17.1 27.3 +10.2 7B Base RLHF no feedback w/ GPT-4 feedback âfeedback, gpt-4 no feedback w/ GPT-4 feedback âfeedback, gpt-4 2.9 4.1 +1.3 13.6 14.6 +1.0 35.8 46.3 +10.5 â0.0 2.2 +2.2 0.0 8.1 +8.1 0.0 2.9 +2.9 9.7 14.7 +4.9 7.3 9.0 +1.7 LLaMA-2 13B Base no feedback w/ GPT-4 feedback âfeedback, gpt-4 3.5 10.8 +7.3 | 2309.10691#39 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 39 | Figure 3: Illustration of training loss curves. DC-2's curve closely resembles those of DC-3 and 5, so it has been excluded from the figure for clarity.
Fig. 3 presents the training loss curves for various data combinations, from which several insights can be observed: 1) While DC-6 demonstrated the highest average accuracy in our quantitative evaluations, its training loss was also the most substantial. This suggests that a lower training loss doesn't necessarily correlate directly with superior model performance. 2) DC-4, with a considerable portion of its data coming from the code domain, exhibited the lowest training loss. This implies that as the amount of code in training increases, the training loss diminishes. 3) The training loss values for other combinations appeared to be relatively consistent with one another.
# 6 Application: Large Batch-size Training on 7B
# 7B Training Data Combination | 2309.10818#39 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 40 | Although GPT-4 still dominates in the field of multilingualism, open-source models are catching up closely. In zh-en tasks, Baichuan 2-13B-Base has slightly surpassed GPT-4.
# 5.5 Safety Evaluations
In Sec. 4, we describe the efforts made to improve the safety of Baichuan 2. However, some prior work indicates that helpfulness and harmlessness are two sides of a seesaw: when harmlessness increases, helpfulness can decrease slightly (Bai et al., 2022a). So we evaluate these two factors before and after safety alignment.
Figure 6 shows the helpfulness and harmlessness before and after the safety alignment of Baichuan 2. We can see that our safety alignment process did not hurt the helpfulness while significantly improving the harmlessness.
Then we evaluate the safety of our pre-trained models using the Toxigen (Hartvigsen et al., 2022) dataset. Same as LLaMA 2, we use the cleaned | 2309.10305#40 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 40 | # 4.5 Effect of document selection
Given the different performance of the different prompts, and indeed the different paraphrases, it is tempting to choose the best-performing variant and commit to using it for future labelling. This of course carries a risk: performance on these topics and documents might not predict performance on other, unseen, topics and documents. The conventional guard against this is a train:test split. Here, we can interpret “training” as the choice of prompt, and we used repeated splits to understand the risk of choosing the best variant. For each of 1000 iterations, we randomly split our 3000 TREC and LLM labels into two sets of 1500 documents. We measured κ for each prompt (or paraphrase) over the first 1500, noted the best performer (highest κ), and measured again on the second 1500.
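A minimal sketch of this repeated split-half check, assuming labels are held per prompt variant and scored against the assessor labels with a function such as Cohen's κ; the data layout, names, and the baseline key are illustrative assumptions.

```python
import random

def split_half_check(labels_by_prompt, gold, score, baseline="-----", n_iter=1000, seed=0):
    """Pick the best prompt on a random half of the documents, then check whether it
    still beats the baseline prompt on the held-out half."""
    rng = random.Random(seed)
    n = len(gold)
    wins, best_counts = 0, {}

    def score_on(prompt, subset):
        # Agreement (e.g. Cohen's kappa) between this prompt's labels and gold on a subset.
        return score([labels_by_prompt[prompt][i] for i in subset],
                     [gold[i] for i in subset])

    for _ in range(n_iter):
        idx = list(range(n))
        rng.shuffle(idx)
        first, second = idx[: n // 2], idx[n // 2:]
        best = max(labels_by_prompt, key=lambda p: score_on(p, first))
        best_counts[best] = best_counts.get(best, 0) + 1
        if score_on(best, second) > score_on(baseline, second):
            wins += 1
    return wins, best_counts
```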
The results were consistent. When scoring prompts (Table 2), in all 1000 iterations the best-performing prompt on the first split also beat the baseline “-----” on the second split. That means that, starting from the baseline prompt, if we chose an alternative because it was the best improvement on one set of documents, we can be almost certain that prompt would still be an improvement on another set. In 829/1000 first splits, the best-performing variant was -DNA-,
| 2309.10621#40 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 40 | LLaMA-2 13B Base no feedback w/ GPT-4 feedback âfeedback, gpt-4 3.5 10.8 +7.3 5.2 15.4 +10.5 +10.3 50.0 60.5 14.5 23.2 +8.7 RLHF no feedback w/ GPT-4 feedback âfeedback, gpt-4 19.6 24.1 +4.4 3.7 9.7 +6.0 2.2 10.3 +8.1 11.9 17.6 +5.6 70B Base no feedback w/ GPT-4 feedback âfeedback, gpt-4 18.7 22.5 +3.8 12.5 27.9 +14.2 +15.4 59.0 73.1 26.4 35.3 +8.9 RLHF no feedback w/ GPT-4 feedback âfeedback, gpt-4 20.2 23.1 +2.9 8.8 19.9 +20.1 +11.0 21.6 41.8 17.9 26.6 +8.7 Lemur-v1 70B Base no feedback w/ GPT-4 feedback âfeedback, gpt-4 16.1 20.9 +4.8 15.4 | 2309.10691#40 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 40 | # 6 Application: Large Batch-size Training on 7B
# 7B Training Data Combination
Our 7B large batch size (LBS) training dataset is primarily based on SlimPajama; however, to obtain a sufficient proportion of web text, we have incorporated additional web data from the Commoncrawl corpus in RedPajama. We have also adjusted the proportions of various data sources in line with our 1.3B model training. For instance, we elevate the sampling frequency of Github and Wikipedia and increase the diversity of data sources by adding S2orc [25] and Stack-Markdown [21] following [38], as detailed in Table 8. It's crucial to understand that our primary focus is not solely on achieving the best performance. Instead, we place a higher emphasis on optimizing data combinations and ensuring the convergence of training large language models with large batch sizes. Consequently, we continue to utilize the SlimPajama/RedPajama Commoncrawl instead of higher-quality RefinedWeb. | 2309.10818#40 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
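The SlimPajama-DC excerpt above describes re-weighting data sources (upsampling Github and Wikipedia, adding S2orc and Stack-Markdown) when building the 7B training mixture. A minimal Python sketch of per-source sampling weights follows; the proportions, source names, and function names are illustrative placeholders, not the values from Table 8.

```python
# Minimal sketch of weighting multiple data sources for a training mixture.
# The proportions below are placeholders, not the paper's Table 8 values.
import random

source_proportions = {
    "commoncrawl": 0.60,
    "github": 0.10,
    "wikipedia": 0.08,
    "books": 0.06,
    "arxiv": 0.05,
    "s2orc": 0.06,
    "stack-markdown": 0.05,
}

def normalise(weights):
    """Rescale raw weights so they sum to one."""
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

def sample_source(weights, rng=random):
    """Draw one source name according to the (normalised) weights."""
    names, probs = zip(*normalise(weights).items())
    return rng.choices(names, weights=probs, k=1)[0]

counts = {name: 0 for name in source_proportions}
for _ in range(10_000):
    counts[sample_source(source_proportions)] += 1
print(counts)  # empirical draw frequencies roughly match the proportions
```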
2309.10305 | 41 | GPT-4 GPT-3.5 Turbo 59.32 42.31 77.16 61.17 80.28 53.81 74.58 52.92 72.51 56.25 7B LLaMA-7B LLaMA2-7B MPT-7B Falcon-7B ChatGLM2-6B Baichuan 1-7B Baichuan 2-7B-Base 27.45 29.20 27.45 23.66 40.76 34.64 44.46 33.34 36.75 26.67 25.33 44.54 42.37 56.39 24.12 27.49 16.97 21.29 26.24 27.42 32.68 21.72 24.78 19.79 18.07 45.53 39.46 54.93 27.45 37.93 31.96 33.88 30.22 31.39 41.73 13B LLaMA-13B LLaMA 2-13B Vicuna-13B Chinese-Alpaca-Plus-13B XVERSE-13B Baichuan 1-13B-Base Baichuan 2-13B-Base 27.54 34.08 28.38 35.32 46.42 41.34 47.40 35.14 47.42 40.99 46.31 58.08 51.77 59.33 28.83 35.04 | 2309.10305#41 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 41 | Large language models can accurately predict searcher preferences
Original Given a query and a web page, you must provide a score on an integer scale of 0 to 2 with the following meanings: 2 = highly relevant, very helpful for this query 1 = relevant, may be partly helpful but might contain other irrelevant content 0 = not relevant, should never be shown for this query Assume that you are writing a report on the subject of the topic. If you would use any of the information contained in the web page in such a report, mark it 1. If the web page is primarily about the topic, or contains vital information about the topic, mark it 2. Otherwise, mark it 0. . . . Split this problem into steps: Consider the underlying intent of the search. Measure how well the content matches a likely intent of the query (M). Measure how trustworthy the web page is (T). Consider the aspects above and the relative importance of each, and decide on a final score (O). Produce a JSON dictionary of scores without providing any reasoning.
κ
= 0.64 | 2309.10621#41 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
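The excerpt above shows the "Original" labelling prompt, which asks for a JSON dictionary of intent-match (M), trustworthiness (T), and overall (O) scores on a 0 to 2 scale, and the surrounding text describes averaging multiple generations and discarding unparseable output. The sketch below illustrates that flow under stated assumptions: the prompt text is abbreviated, `call_llm` is a stand-in for the actual model call, and the parsing rules are one plausible reading of the paper's description rather than its exact implementation.

```python
# Sketch: build a 0-2 relevance-labelling prompt and turn the model's JSON
# reply into a score. Field names M/T/O follow the excerpt; the LLM call is a stub.
import json
from statistics import mean

PROMPT_TEMPLATE = """Given a query and a web page, you must provide a score on an
integer scale of 0 to 2 ... Produce a JSON dictionary of scores (M, T, O) without
providing any reasoning.

Query: {query}
Web page: {page}"""

def parse_overall_score(reply: str):
    """Return the overall label O in [0, 2], or None if the output is unparseable."""
    try:
        scores = json.loads(reply)
        o = int(scores["O"])
        return o if 0 <= o <= 2 else None
    except (ValueError, KeyError, TypeError):
        return None

def label(query: str, page: str, call_llm, samples: int = 1):
    """Average several sampled labels; drop unparseable generations entirely."""
    prompt = PROMPT_TEMPLATE.format(query=query, page=page)
    parsed = [parse_overall_score(call_llm(prompt)) for _ in range(samples)]
    parsed = [p for p in parsed if p is not None]
    return mean(parsed) if parsed else None
```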
2309.10691 | 41 | 70B Base no feedback w/ GPT-4 feedback âfeedback, gpt-4 16.1 20.9 +4.8 15.4 61.2 70.2 27.9 +9.0 +12.5 26.3 33.8 +7.5 Vicuna-v1.5 SIFT 7B SIFT 13B SIFT no feedback w/ GPT-4 feedback âfeedback, gpt-4 no feedback w/ GPT-4 feedback âfeedback, gpt-4 no feedback w/ GPT-4 feedback âfeedback, gpt-4 31.6 32.6 +0.9 â 10.1 9.8 â0.3 â 11.1 16.5 +5.4 27.2 59.7 68.7 44.9 +9.0 +17.6 â 2.2 6.6 +4.4 â 2.2 1.5 â0.7 29.1 64.9 +35.8 â 8.2 5.2 â3.0 37.0 43.7 +6.7 12.6 21.7 +9.0 8.4 10.4 +2.1 Closed-source LLM chat-bison-001 - - no feedback w/ GPT-4 feedback âfeedback, | 2309.10691#41 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10621 | 42 | κ
= 0.64
Paraphrase 1 Rate each web page for how well it matches the query, using these numbers: 0 = no match, 1 = some match, 2 = great match. Think of writing a report on the query topic. A web page gets 2 if it is mainly about the topic or has important information for the report. A web page gets 1 if it has some information for the report, but also other stuff. A web page gets 0 if it has nothing to do with the topic or the report. . . . To score this problem, follow these steps: - Think about what the search query is trying to achieve. - Assign a score from 0 to 2 for how well the content addresses the query's goal (M). Higher scores mean better matches. - Assign a score from 0 to 2 for how reliable the web page is (T). Higher scores mean more trustworthiness. - Combine the scores for M and T, and give more weight to the more important aspect. Assign a final score from 0 to 2 (O). Higher scores mean better overall quality. - Write a JSON dictionary with the keys M, T, and O, and their corresponding scores. Do not explain your scores.
κ
= 0.72 | 2309.10621#42 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 42 | 8.4 10.4 +2.1 Closed-source LLM chat-bison-001 - - no feedback w/ GPT-4 feedback âfeedback, gpt-4 â14.2 25.0 +10.8 29.9 47.0 +17.2 â0.0 6.6 +6.6 14.5 25.8 +11.3 claude-2 - - no feedback w/ GPT-4 feedback âfeedback, gpt-4 52.2 55.1 +2.8 36.8 47.1 +26.9 +10.3 14.2 41.0 39.9 50.0 +10.1 claude-instant-1 - - no feedback w/ GPT-4 feedback âfeedback, gpt-4 50.0 54.4 +4.4 35.3 47.0 53.0 47.1 +6.0 +11.8 45.9 52.4 +6.5 gpt-3.5-turbo-0613 - - no feedback w/ GPT-4 feedback âfeedback, gpt-4 36.7 50.3 +13.6 41.8 66.4 +24.6 29.4 39.0 +9.6 36.2 51.4 +15.2 | 2309.10691#42 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 42 | Table 8: Data combination of 7B model training in large batch size style.
# 7B Model Training Configurations
Architecture. For the 7B model training, we adopt the MPT architecture [38]; the max sequence length is 2,048. We use Triton [35] with Flash Attention [8] as the self-attention implementation. Alibi is enabled to make the model more flexible for input length extrapolation. The model's total number of parameters is 6.7B. Tokenizer. The tokenizer used for 7B training is adapted from GPT-NeoX-20b. Following [38], the model's vocabulary size is adjusted to 50,432 for improved mfu, leaving a few tokens available that can be used in subsequent training. Optimizer. We employ the AdamW optimizer to train our models, adopting these specific hyper-parameters: β1 set at 0.9 and β2 at 0.95. We adopt a learning rate schedule that traces a cosine pattern, concluding with a learning rate that is 10% of its maximum value. Along with this, we use a multi-stage weight
14 | 2309.10818#42 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
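The configuration excerpt above specifies AdamW with β1 = 0.9 and β2 = 0.95 and a cosine learning-rate schedule that ends at 10% of the peak value, and the continuation in the next chunk adds a 2,000-step warmup and gradient clipping at 1.0. A minimal PyTorch sketch of such a schedule follows; the model, step counts, and peak learning rate are placeholders, and this is an illustrative reading rather than the authors' actual training code.

```python
# Sketch (PyTorch): AdamW with betas (0.9, 0.95), linear warmup, and a cosine
# schedule that decays to 10% of the peak learning rate, as the excerpt describes.
import math
import torch

model = torch.nn.Linear(512, 512)          # stand-in for the real 7B model
max_steps, warmup_steps, peak_lr = 10_000, 2_000, 1e-4

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                              betas=(0.9, 0.95), weight_decay=0.1)

def lr_lambda(step: int) -> float:
    # Linear warmup, then cosine decay whose floor is 10% of the peak LR.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))
    return 0.1 + 0.9 * cosine

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(max_steps):
    # loss.backward() would go here in real training
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)   # clip value 1.0
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```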
2309.10621 | 43 | κ
= 0.72
Paraphrase 2 To rate a web page for a query, use 0, 1, or 2. Use 0 if the page has nothing to do with the query. Use 1 if the page has some useful information, but also other stuff. Use 2 if the page is mainly about the query or has important information. . . . For this problem, you need to do the following: - Think about what the searcher wants to find out. - Rate how well the content answers the query, from 0 (poor) to 2 (excellent) (M). - Rate how reliable the web page is, from 0 (low) to 2 (high) (T). - Based on the ratings and their importance, give a final score from 0 to 2 (O). - Write a JSON dictionary of the scores without explaining them.
κ
= 0.50
Fig. 3. Examples of paraphrased prompts, based on prompt format "-DNA-" (description, narrative, and aspects). Each paraphrase was run with each of our 3000 sampled documents, to gauge the model's sensitivity to changes in the prompt text.
[Figure 4 plot: Cohen's κ against TREC assessors, vertical axis from 0.00 to 1.00, showing the original -DNA- prompt (best from Table 2) and prompt R---M (worst from Table 2).] | 2309.10621#43 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10818 | 43 | 14
decay scheduler as described in Sec. 6.4, cap the gradient with a clipping value of 1.0, and use a warmup spanning 2,000 steps. System and platform. For our 7B model training with a large batch size, we use 232 NVIDIA A100 GPUs (80G). We employ llm-foundry [37] as the training platform. We use FSDP with activation checkpointing enabled to reduce memory consumption. We also use automatic mixed precision with bf16 in training.
# 6.3 Fast Training with Large Batch-size | 2309.10818#43 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
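The excerpt above names FSDP with activation checkpointing and bf16 mixed precision as the memory-saving pieces of the 7B setup. The sketch below shows activation checkpointing and bf16 autocast on a toy block; the FSDP wrapper itself is omitted because it requires an initialized distributed process group, so this is only a schematic illustration of the techniques named, not the llm-foundry configuration.

```python
# Sketch of two memory-saving pieces named in the excerpt: bf16 mixed precision
# and activation checkpointing. The model is a toy stand-in; FSDP sharding is omitted.
import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(torch.nn.Linear(1024, 4096), torch.nn.GELU(),
                            torch.nn.Linear(4096, 1024))

def forward_with_checkpointing(x):
    # Recompute the block's activations in the backward pass instead of storing them.
    return checkpoint(block, x, use_reentrant=False)

x = torch.randn(8, 1024, requires_grad=True)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = forward_with_checkpointing(x)
    loss = y.float().pow(2).mean()
loss.backward()
torch.nn.utils.clip_grad_norm_(block.parameters(), 1.0)  # clipping value 1.0
print(loss.item())
```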
2309.10305 | 44 | GPT-4 GPT-3.5 Turbo GSM8K MATH HumanEval MBPP 63.60 61.40 89.99 57.77 40.20 13.96 69.51 52.44 LLaMA-7B LLaMA 2-7B MPT-7B Falcon-7B ChatGLM 2-6B Baichuan 1-7B Baichuan 2-7B-Base 9.78 16.22 8.64 5.46 28.89 9.17 24.49 3.02 3.24 2.90 1.68 6.40 2.54 5.58 11.59 12.80 14.02 - 9.15 9.20 18.29 14.00 14.80 23.40 10.20 9.00 6.60 24.20 LLaMA-13B LLaMA 2-13B Vicuna-13B Chinese-Alpaca-Plus-13B XVERSE-13B Baichuan 1-13B-Base Baichuan 2-13B-Base 20.55 28.89 28.13 11.98 18.20 26.76 52.77 3.68 4.96 4.36 2.50 2.18 4.84 10.08 15.24 15.24 16.46 16.46 15.85 11.59 17.07 21.40 27.00 | 2309.10305#44 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 44 | [Figure 4 plot: Cohen's κ against TREC assessors, vertical axis from 0.00 to 1.00, showing the original -DNA- prompt (best from Table 2) and prompt R---M (worst from Table 2).]
Fig. 4. Variation in Cohen's κ between LLM labels and human labels, over a stratified sample of 3000 documents from TREC-Robust. Small changes in the wording of the prompt, while keeping the structure the same, lead to substantial changes in κ. Each vertical line is one paraphrased prompt, with empirical 95% CI from 20 bootstraps over documents. Grey interval at left is the empirical 95% CI over all bootstraps and paraphrases.
which is again consistent with the above but also suggests the choice is reliable. (The next best performer was --NA-, 139 times out of 1000; of course in practice these two prompts are very similar.)
Looking at the 42 paraphrases of Figure 4, in 989/1000 iterations the best-performing paraphrase on the first 1500 documents still beat the initial -DNA- prompt on the second 1500. The best-performing paraphrase was again consistent: variant #13 had the highest κ on the first split in 838/1000 iterations. This is marginally less consistent than the choice of overall prompt design. | 2309.10621#44 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
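The excerpt above describes a split-half check: choose the best-performing prompt variant on one half of the documents and see whether it still beats the baseline on the held-out half, repeated over many random splits. A small sketch of that procedure follows; it scores variants by raw agreement with the gold labels for brevity, whereas the paper uses Cohen's κ, and all data structures are hypothetical.

```python
# Sketch of the split-half consistency check: pick the best prompt variant on one
# half of the documents and test whether it beats the baseline on the other half.
import random

def agreement(labels, gold):
    """Fraction of documents where a variant's label matches the gold label."""
    return sum(l == g for l, g in zip(labels, gold)) / len(gold)

def split_half_wins(variant_labels, baseline_labels, gold, iterations=1000, seed=0):
    rng = random.Random(seed)
    n, wins = len(gold), 0
    for _ in range(iterations):
        idx = list(range(n))
        rng.shuffle(idx)
        a, b = idx[: n // 2], idx[n // 2:]
        pick = lambda labels, part: [labels[i] for i in part]
        # Choose the variant that agrees best with gold on split A ...
        best = max(variant_labels,
                   key=lambda v: agreement(pick(variant_labels[v], a), pick(gold, a)))
        # ... then test it against the baseline on the held-out split B.
        if agreement(pick(variant_labels[best], b), pick(gold, b)) >= \
           agreement(pick(baseline_labels, b), pick(gold, b)):
            wins += 1
    return wins / iterations
```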
2309.10691 | 44 | * Evaluated LLM failed to produce parsable output as instructed in some cases (§2.1). See §3.5 and Tab. A.7 for details. † We identified potential undesired artifacts in its training data, which hurt its performance. See §3.5 for details.
Table 4: LLMs' ability to provide feedback, measured by ∆feedback with a fixed evaluated LLM (GPT-3.5). We also report SR5 differences between the feedback-provider and evaluated LLM.
Table 5: Human Evaluation of GPT-4 Generated Feedback against human-written feedback, measuring helpfulness and human-likeness.
Feedback-provider LLM gpt-4-0613 claude-instant-1 gpt-3.5-turbo-16k-0613 CodeLlama-34b (Base) Llama-2-70b (Base) Llama-2-70b-chat (RLHF) CodeLlama-34b-Instruct (SIFT) SR5 Difference ∆feedback +15.2 +33.3 +1.5 +9.7 -10.4 +4.1 +2.4 -8.0 -0.5 -9.7 -14.0 -18.3 +3.2 -19.1 | 2309.10691#44 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 44 | Large batch training allows a larger learning rate, leading to a faster convergence of large models. Also, utilizing a larger batch size can optimize hardware resource usage to make training procedures more efficient. Additionally, fewer batches are required, which further accelerates the training process. As shown in Table 9, our large batch training scheme achieves much higher throughput and mfu than LLaMA [36] and MPT [38] with fewer total training GPU hours. Overall, in a convex optimization framework, leveraging a larger portion of the dataset typically leads to enhanced results. However, for most large deep models that involve non-convex optimizations, the precise nature of the loss landscape remains elusive, making the scenario more intricate. Many prior works [17, 19] have noticed that training with larger batches often results in overfitting compared to those using smaller batch sizes for the same network. When utilizing large batch training, there is a propensity for the model to become stuck or even gravitate towards potential saddle points within the loss landscape. While large batch training methods often focus on the nearest relative minima they encounter, networks trained with smaller batches usually navigate the loss landscape more thoroughly before committing to an optimal minimum. The minima reached through large batch training can be distinctly different from those achieved with smaller batch training methods. In the following, we introduce an approach to mitigate overfitting when training large language models in a large batch-size scheme. | 2309.10818#44 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
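Table 9 in the excerpt above reports throughput (tokens per second per GPU) and model FLOPs utilization (mfu). A rough way to sanity-check such numbers is the common approximation of about 6 FLOPs per parameter per token, divided by the accelerator's peak throughput; the sketch below applies it with the A100's 312 TFLOP/s bf16 peak. Because attention FLOPs are ignored, the estimate comes out slightly below the mfu values quoted in the table.

```python
# Back-of-the-envelope MFU check for the throughput numbers quoted above.
# Uses the common ~6 * params FLOPs-per-token approximation (attention terms ignored),
# so the estimate lands a little below the mfu reported in Table 9.
A100_BF16_PEAK_FLOPS = 312e12      # dense bf16 peak of one A100, FLOP/s

def approx_mfu(params: float, tokens_per_sec_per_gpu: float) -> float:
    flops_per_token = 6 * params                   # forward + backward, rough
    achieved = flops_per_token * tokens_per_sec_per_gpu
    return achieved / A100_BF16_PEAK_FLOPS

print(round(approx_mfu(6.7e9, 3626), 3))   # ~0.47 for the quoted LBS-7B throughput
print(round(approx_mfu(6.7e9, 3310), 3))   # ~0.43 for the quoted MPT-7B throughput
```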
2309.10621 | 45 | These observations suggest that while performance is variable, there is little chance of regret. That is, if we start with a baseline prompt and generate variants (e.g., by adding features or by paraphrasing) and choose to switch to the best variant, that is a safe choice. If we choose the best variant on some set of documents, performance on unseen documents will almost never turn out to be worse than the baseline.
# 4.6 Measuring query difficulty and run effectiveness
Document labels themselves are not the goal of most evaluations. Instead, we typically map these labels to numeric values (0 and 1 for binary labels) and then use a metric such as average precision to aggregate to scores for each query and run. The scores for queries let us investigate instances where we do badly, meaning where there is scope for improvement; the scores for runs let us choose which combination of algorithms and parameters performs the best overall.
Accordingly, another way to judge a labelling scheme is by whether (under some metric) it gives the same ranking of queries or runs. If we swapped labelling schemes, would we still identify the same queries as hard? Would we still identify the same runs as top performers?
Large language models can accurately predict searcher preferences | 2309.10621#45 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
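The excerpt above explains that document labels are mapped to numeric values and aggregated with a metric such as average precision to score queries and runs, so that hard queries and strong runs can be identified. The sketch below shows one simple version of that aggregation, binarising graded labels and ranking queries by P@10; the run and qrel structures are illustrative and the metric choice is for brevity, not the paper's exact evaluation.

```python
# Sketch of the aggregation step described above: map graded labels to binary
# relevance, score each query with a simple metric (P@10 here), and rank queries
# from hardest to easiest. Run/qrel structures are illustrative.
def binarise(label: float) -> int:
    return 1 if label >= 1 else 0          # 'relevant' and 'highly relevant' -> 1

def precision_at_k(ranked_doc_ids, qrels, k=10):
    labels = [binarise(qrels.get(d, 0)) for d in ranked_doc_ids[:k]]
    return sum(labels) / k

def rank_queries_by_difficulty(run, qrels):
    """run: {qid: [doc ids in ranked order]}, qrels: {qid: {doc id: graded label}}."""
    scores = {q: precision_at_k(docs, qrels.get(q, {})) for q, docs in run.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])   # lowest-scoring = hardest

example_run = {"q1": ["d1", "d2", "d3"], "q2": ["d4", "d5"]}
example_qrels = {"q1": {"d1": 2, "d3": 1}, "q2": {"d9": 1}}
print(rank_queries_by_difficulty(example_run, example_qrels))
```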
2309.10691 | 45 | Percentage (%) Which feedback is more Helpful Human-Like Both are equally GPT-4 feedback Human feedback 36.3 54.9 8.8 69.9 22.1 8.0
feedback. It's also hard for humans to distinguish GPT-4 generated feedback from human feedback (human-like) in 92% of the cases. We also compare GPT-4 generated and human-written feedback by asking gpt-3.5-turbo-0613 to continue problem-solving with either a turn of (1) human language feedback or (2) GPT-4 feedback. Results show that human feedback and GPT-4 feedback lead to similar model performance SRfeedback
# 4 RELATED WORK
4.1 LLM IN INTERACTION | 2309.10691#45 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 45 | Table 9 data, per model (batch size; # GPUs (A100-80G); throughput; mfu; GPU-hours): LLaMA-7B: 4M; n/a; n/a; n/a; 82,432. MPT-7B: 4M; 232; 3,310; 0.4575; 84.351. LBS-7B (ours): 14M; 232; 3,626; 0.5011; 76,999.
Table 9: Training speed of throughput (tokens per sec on each GPU), model FLOPs utilization (mfu) [5] and total GPU-hours (per trillion training tokens).
# 6.4 Progressive Training on Weight Decay
Prior work [24] observed that the dropout operation is utilized only in the early stages of training and is deactivated in subsequent phases. Models that incorporate this early dropout strategy tend to exhibit reduced final training loss compared to models that do not use dropout. In contrast to this, our approach
[Figure 4 plot: training loss versus tokens processed, spanning roughly 200G to 800G tokens.]
Figure 4: Loss curve of our LBS-7B training. | 2309.10818#45 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
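The excerpt above introduces Progressive Training on Weight Decay (PTWD), and the following chunk describes its three phases: no weight decay, then a strong value of 0.5, then the standard 0.1. A minimal sketch of such a staged schedule follows; it switches phases at fixed thirds of the training budget for simplicity, whereas the paper transitions when the loss converges or stabilizes, and the toy model and optimizer settings are placeholders.

```python
# Sketch of a three-phase weight-decay schedule in the spirit of PTWD:
# no weight decay, then a strong 0.5, then the usual 0.1. Fixed thirds of the
# budget are used here for brevity; the paper switches phases based on the loss.
import torch

def ptwd_weight_decay(step: int, total_steps: int) -> float:
    progress = step / max(1, total_steps)
    if progress < 1 / 3:
        return 0.0
    if progress < 2 / 3:
        return 0.5
    return 0.1

model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.0)

total_steps = 300
for step in range(total_steps):
    for group in optimizer.param_groups:
        group["weight_decay"] = ptwd_weight_decay(step, total_steps)
    # forward/backward omitted in this sketch
    optimizer.step()
    optimizer.zero_grad()
```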
2309.10621 | 46 | Large language models can accurately predict searcher preferences
Metrics: P@10, RBP@100 (p = 0.6), MAP@100, (random permutation).
Hardest queries (RBO, p = 0.9): 0.40, 0.42, 0.48, 0.04.
Best runs (RBO, p = 0.7): 0.79, 0.63, 0.50, 0.03.
Best groups (RBO, p = 0.7): 0.97, 0.91, 0.58, 0.21.
Table 4. Consistency of rankings on LLM labels compared to human labels, replicating all qrels in TREC-Robust to a depth of 100. Queries, runs, and groups were scored with each of three metrics, based on each of two sets of labels. Higher numbers mean the rankings based on LLM labels are more like those based on human labels. We report normalised RBO, ranging from zero (LLMs and humans put queries/runs/groups in opposite order) to one (LLMs and humans give scores putting queries/runs/groups in the same order). | 2309.10621#46 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
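Table 4 in the excerpt above reports rank-biased overlap (RBO) between the query, run, and group orderings produced by LLM and human labels. The sketch below computes a truncated RBO for two short rankings; the normalisation used in the table and the extrapolated tail of the full RBO definition are omitted, and the example rankings are made up.

```python
# Minimal sketch of rank-biased overlap (RBO) between two rankings, truncated at
# the list length. The table above reports a normalised variant; normalisation and
# the extrapolated tail are omitted here for brevity.
def rbo_truncated(list_a, list_b, p=0.9):
    k = min(len(list_a), len(list_b))
    seen_a, seen_b = set(), set()
    score = 0.0
    for depth in range(1, k + 1):
        seen_a.add(list_a[depth - 1])
        seen_b.add(list_b[depth - 1])
        agreement = len(seen_a & seen_b) / depth      # overlap proportion at this depth
        score += (p ** (depth - 1)) * agreement
    return (1 - p) * score

queries_by_llm_labels = ["q3", "q1", "q4", "q2"]     # hardest first, per LLM labels
queries_by_human_labels = ["q3", "q4", "q1", "q2"]   # hardest first, per human labels
print(round(rbo_truncated(queries_by_llm_labels, queries_by_human_labels, p=0.9), 3))
```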
2309.10691 | 46 | # 4 RELATED WORK
4.1 LLM IN INTERACTION
Interact with Users. LLMs have demonstrated extensive potential in seamless interaction with human users and in assimilating real-time human feedback during inference processes (Fernandes et al., 2023). According to recent studies, this collaborative synergy between humans and LLMs has been explored across various domains and applications, including sentences editing (Reid & Neubig, 2022; Schick et al., 2023c), code generation (Nijkamp et al., 2023), iterative output refine- ment (Saunders et al., 2022), and creative writing (Lee et al., 2022a; Shu et al., 2023; Wang et al., 2023b), generative information-seeking (Kamalloo et al., 2023), and even theorem proving (Yang et al., 2023b). The partnership between users and LLMs continues to redefine possibilities across diverse research areas, signaling promising advancements in the near future. | 2309.10691#46 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 46 | Figure 4: Loss curve of our LBS-7B training.
emphasizes the role of weight decay during large model training. We introduce a novel training strategy for large language models, wherein the training process is segmented into various stages. Within each stage, a distinct weight decay is applied to the model to serve specific objectives. We've termed this approach Progressive Training on Weight Decay (PTWD). Owing to this methodology, our model, even when trained with a large batch size and extremely few iterations, achieves smooth convergence. As illustrated in Fig. 4, our training strategy consists of three distinct phases. Initially, we negate weight decay by setting it to zero and allow the model to train until full convergence is achieved. It can usually reach a lower loss level within this stage compared to using weight decay, even if it slightly overfits. Following this, in the second phase, we introduce a substantial weight decay, with a value of 0.5 in our experiments, to suppress the overfitting. Once the loss values stabilize, we transition to the third phase, wherein a standard weight decay of 0.1 is implemented, a value consistent with many other LLM training setups. Intriguingly, each phase spontaneously converges to roughly 1/3 of the total training budget, ensuring effective allocation of training budget throughout the process. | 2309.10818#46 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
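Below is a minimal, hypothetical sketch of the three-phase Progressive Training on Weight Decay (PTWD) schedule described in the 2309.10818#46 chunk above (weight decay 0.0, then 0.5, then 0.1). The hard-coded 1/3 phase boundaries and the optimizer-update snippet are illustrative assumptions, not the authors' implementation, which switches phases when the loss converges or stabilises.

```python
# Illustrative PTWD schedule (assumed 1/3-1/3-1/3 phase split; the paper switches
# phases based on loss behaviour rather than a fixed step count).
def ptwd_weight_decay(step: int, total_steps: int) -> float:
    """Return the weight decay to apply at a given training step."""
    progress = step / max(total_steps, 1)
    if progress < 1 / 3:   # phase 1: no weight decay, train to convergence
        return 0.0
    if progress < 2 / 3:   # phase 2: large weight decay to suppress overfitting
        return 0.5
    return 0.1             # phase 3: standard weight decay, as in many LLM setups

if __name__ == "__main__":
    total = 9
    print([ptwd_weight_decay(s, total) for s in range(total)])
    # In a real training loop the value would be pushed into the optimizer, e.g.
    # for group in optimizer.param_groups:
    #     group["weight_decay"] = ptwd_weight_decay(step, total_steps)
```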
2309.10305 | 47 | GPT-4 GPT-3.5 Turbo 20.43 17.59 1.82 LLaMA-7B LLaMA 2-7B MPT-7B Falcon-7B ChatGLM 2-6B Baichuan 1-7B Baichuan 2-7B-Base 17.27 12.02 9.54 25.76 15.14 11.92 8.96 20.77 9.53 9.28 22.13 15.67 22.28 7.77 9.42 25.07 16.51 12.72 27.27 20.87 16.17 0.00 0.79 0.10 0.11 0.64 0.41 1.39 4.47 4.99 3.54 1.35 1.78 6.66 11.21 1.41 2.20 2.91 0.41 0.26 2.24 3.11 8.73 10.15 6.54 6.41 4.61 9.86 12.76 7.63 10.14 7.48 7.91 6.68 10.50 13.25 LLaMA-13B 21.75 16.16 13.29 25.44 19.25 17.49 LLaMA 2-13B Vicuna-13B 22.63 18.04 14.67 Chinese-Alpaca-Plus-13B 22.53 13.82 | 2309.10305#47 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 47 | In Table 4 we report the consistency of query and run rankings as we switch from human-assigned to LLM-assigned labels. In each case we score all the queries with one metric, e.g. P@10, based on TREC's human labels, and score them again based on our LLM labels. (We collected additional labels so that every document retrieved to depth 100, in every run, was labelled with prompt -DNA- except those which were never labelled at TREC. For consistency with TREC, we assume these unlabelled documents are not relevant.) This gives two rankings of queries. The consistency between these rankings is measured with RBO, normalised so that a score of 0 represents an inverted order and a score of 1 represents an identical ordering. We assume an experimenter would be willing to look at the worst ten queries, so set the persistence parameter p = 0.9. To help interpret the figures we also report the RBO scores for random permutations, i.e. the consistency between the TREC ordering and a random re-ordering of the same queries. | 2309.10621#47 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 47 | Interact with Tools. Engaging with external tools can lead LLMs to more accurate and reliable outputs (Peng et al., 2023; Gou et al., 2023; Qin et al., 2023a). LLMs can be connected with real-world Application Programming Interfaces (APIs), enabling them to actively engage with diverse external tools (Qin et al., 2023b; Parisi et al., 2022; Schick et al., 2023a; Tang et al., 2023; Patil et al., 2023; Song et al., 2023; Hao et al., 2023). For example, LLMs can connect with (1) the Internet to obtain the latest information (Nakano et al., 2021; Shuster et al., 2022; Paranjape et al., 2023; Liu et al., 2023b); (2) a program interpreter to run the generated code (Chen et al., 2022; Gao et al., 2023; Drori et al., 2022; Pan et al., 2023; Wang et al., 2023a); (3) a multimodal perceiver to obtain information beyond the language modality (Huang et al., 2023a; Lu et al., 2023); and (4) a physical simulator to better understand physical laws (Liu et al., 2023a).
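As a concrete illustration of pattern (2) above, the following is a minimal, hypothetical sketch, not MINT's or any cited system's implementation, of using a Python interpreter as a tool: model-generated code is executed and its output, or error, is returned as the observation for the next turn. The `generated` string stands in for an LLM's output.

```python
import contextlib
import io

def run_python_tool(code: str) -> str:
    """Execute generated code, capturing stdout or the error as the tool observation."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # fresh namespace; real systems would sandbox this
    except Exception as exc:  # surface failures back to the model as feedback
        return f"Error: {exc!r}"
    return buffer.getvalue() or "(no output)"

if __name__ == "__main__":
    generated = "print(sum(range(10)))"  # stand-in for model-generated code
    print(run_python_tool(generated))    # -> 45
```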
4.2 EVALUATING INTERACTION | 2309.10691#47 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 47 | # 6.5 Results of Pre-training and Instruction Tuning
The results of our pretraining and subsequent instruction tuning on the ShareGPT dataset are presented in Table 10. Notably, instruction tuning brings a significant enhancement in the MMLU and TruthfulQA metrics, while performance on ARC and HellaSwag decreases slightly. On the whole, average accuracy improves substantially after instruction tuning. More evaluation results on the pretrained LBS model are provided in Table 6.
Model                  Average  ARC   HellaSwag  MMLU  TruthfulQA
Ours-LBS-7B-Base       44.1     44.3  69.8       26.1  36.1
Ours-LBS-7B-Instruct   46.4     43.5  68.0       32.1  42.1
Table 10: Results of our large batch-size (LBS) trained 7B models following Huggingface Leaderboard Evaluation [12] using Harness [14].
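As a quick sanity check (an illustration, not part of the paper), the Average column in Table 10 is the mean of the four benchmark scores:

```python
# Recompute the "Average" column of Table 10 from the per-task scores above.
rows = {
    "Ours-LBS-7B-Base":     [44.3, 69.8, 26.1, 36.1],  # ARC, HellaSwag, MMLU, TruthfulQA
    "Ours-LBS-7B-Instruct": [43.5, 68.0, 32.1, 42.1],
}
for name, scores in rows.items():
    print(f"{name}: average = {sum(scores) / len(scores):.1f}")
# Prints 44.1 and 46.4, matching the Average column.
```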
# 7 Related Work
# 7.1 RedPajama, SlimPajama and Others. | 2309.10818#47 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10621 | 48 | The exercise is repeated for all 110 runs, assuming we want to find the best three or four runs (p = 0.7). Since runs from the same group are likely very similar, we also repeat the exercise for the best run for each group: this simulates choosing the best approach (or perhaps vendor), rather than the best parameter settings. Again we assume we want to find the best three or four for further examination. | 2309.10621#48 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 48 | 4.2 EVALUATING INTERACTION
Existing work on interaction evaluation mostly focuses on a specific task or dimension, like task completion (Liu et al., 2023c), code generation (Yang et al., 2023a), human-LLM collaborative task solving (Lee et al., 2022b; Huang et al., 2023b; Fu et al., 2023), tool manipulation (Tang et al., 2023), and web navigation (Zhou et al., 2023; Deng et al., 2023a). That is, they solely focus on interacting with either the environment or humans, often on a specific task, overlooking the fundamental importance of both elements in LLM interaction. Different from prior work, MINT covers a range of diverse tasks and is designed to measure the multi-turn interaction capabilities of LLMs with both tools and user feedback that are more aligned with real-world applications.
# 5 CONCLUSION
In this work, we present MINT, an evaluation benchmark designed to evaluate LLM's task-solving ability in multi-turn interaction by using tools and leveraging natural language feedback, which we
simulate using GPT-4. We hope MINT can serve as a helpful resource for tracking progress and incentivizing future research in improving LLM's multi-turn task-solving capabilities. We refer to §A for a discussion of limitations and future work. | 2309.10691#48 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 48 | RedPajama [7] aims to develop open-source large language models and begins by replicating the LLaMA training dataset [36], which boasts over 1.2 trillion tokens. This collaborative effort involves entities such as Together, Ontocord.ai, ETH DS3Lab, Stanford CRFM, Hazy Research, and the MILA Québec AI Institute. SlimPajama [33] stands as the highly deduplicated, multi-source, open-source dataset tailored for training large language models. This dataset emerged by refining and eliminating duplicates from the whole 1.2T token RedPajama dataset. Through meticulous filtering of subpar data and repetitive content, it reduced the dataset size by 49.6%, scaling it down from 1.2T to 627B tokens. SlimPajama provides superior quality and computational efficiency for training tasks compared with the original RedPajama dataset. Other efforts have also been made in this direction to construct diverse datasets, such as Pile [13]. It is an English text corpus of 825 GiB, which is designed for the training of large-scale language | 2309.10818#48 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 49 | Table 7: The result of Baichuan 2 compared with other models on multilingual field.
version from the SafeNLP project8, distinguishing neutral and hate types for the 13 minority groups, forming a 6-shot dataset consistent with the original Toxigen prompt format. Our decoding parameters use temperature 0.1 and top-p 0.9 nucleus sampling.
We use the fine-tuned HateBert version optimized in Toxigen (Hartvigsen et al., 2022) for model evaluation. Table 8 shows that, compared to LLaMA 2, the Baichuan 2-7B and Baichuan 2-13B models have some safety advantages.
To ensure comprehensive coverage within each category, we asked human annotators to generate 1,400 data samples. These were further expanded through self-instruction and cleaned by humans for fluency, resulting in 70,000 total samples with 10,000 per category. Examples of those safety prompts and principles are shown in the Appendix D.
We use those samples to evaluate different models and the result is shown in Table 9. We can see that Baichuan 2 is on par with or outperforms other chat models in our safety evaluations.
Model            Toxigen ↓
Baichuan 2-13B   11.48
Baichuan 2-7B    11.72
LLaMA 2-7B       12.28
LLaMA 2-13B      13.24
# Intermediate Checkpoints | 2309.10305#49 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 49 | The consistency of rankings, in all three cases, depends on the metric being used: ordering by MAP is more consistent for queries, and ordering by average P@10 is more consistent for runs and groups. Group-level rankings are more consistent than runs or queries, no matter the metric. It is harder to be consistent when ranking 250 queries than when ranking 110 runs or 14 groups, and small perturbations make a larger difference in ranking since many queries have similar scores. Nonetheless we see that for any problem and choice of metric, labels from LLMs lead to overall rankings which are at least similar to those from human labels, and our imagined experimenters would make similar choices. For example, under all metrics the top three runs are the same; the top five groups are consistent under P@10, the top three under RBP@100, and three of the top four under MAP@100. The worst-performing query is the same under TREC or LLM labels for P@10 and RBP@100, and two of the top three are the same under MAP@100. | 2309.10621#49 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 49 | # REFERENCES
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
Bard API. URL https://www.googlecloudcommunity.com/gc/AI-ML/Google-Bard-API/m-p/538517/.
ChatGPT Plugins. URL https://openai.com/blog/chatgpt-plugins.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. CoRR, abs/2211.12588, 2022. doi: 10.48550/arXiv.2211.12588. URL https://doi.org/10.48550/arXiv.2211.12588. | 2309.10691#49 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 49 | construct diverse datasets, such as Pile [13]. It is an English text corpus of 825 GiB, which is designed for the training of large-scale language models with increased training dataset diversity to improve general cross-domain knowledge and downstream generalization capability. It contains a combination of 22 distinct, high-quality subsets. These subsets incorporate both pre-existing and freshly curated data, with a significant portion sourced from scholarly or professional domains. | 2309.10818#49 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 50 | # Intermediate Checkpoints
We will also release the intermediate checkpoints of the 7B model, from the 220 billion token checkpoint to the 2,640 billion token checkpoint, which is the final output of Baichuan 2-7B-Base. We examine their performance on several benchmarks and the results are shown in Figure 7.
Table 8: Toxigen results of Baichuan 2 foundation models compared with LLaMA 2.
Inspired by BeaverTails (Ji et al., 2023)9, we constructed the Baichuan Harmless Evaluation Dataset (BHED), covering 7 major categories of bias/discrimination, insults/profanity, illegal/unethical content, physical health, mental health, financial privacy, and sensitive topics to evaluate the safety of our chat models.
As shown in the figure, Baichuan 2 demonstrates consistent improvement as training proceeds. Even after 2.6 trillion tokens, there appears to be ample room for further gains. This aligns with previous work on scaling LLMs indicating that data size is a critical factor (Hoffmann et al., 2022). In Appendix C, we provide more detailed training dynamics for both the 7B and 13B models.
# 6 Related Work
8 https://github.com/microsoft/SafeNLP/tree/main
9https://github.com/PKU-Alignment/ beavertails | 2309.10305#50 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 50 | Of course perfect agreement is unlikely even with humans labelling. By way of comparison, Voorhees [1998] reports τ = 0.94 across runs, using labels from different assessors. This is on a different data set, with correspondingly different judgements (and only 33 runs), but it gives a rough upper bound for how consistent runs could ever be. Faggioli et al. [2023] demonstrate τ from 0.76 to 0.86 on TREC Deep Learning data, again under slightly different circumstances (notably, shorter documents and fewer runs). We see τ from 0.77 (MAP@100) to 0.86 (P@10) for our 110 runs with full documents. Given the κ and AUC figures in Table 2, this is at least promising and plausibly as good as most human labellers.
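To make the comparison above concrete, here is a minimal sketch (not the paper's code) of both consistency measures: Kendall's τ over per-run scores, and a truncated rank-biased overlap (RBO) over the induced orderings. The example scores are invented, and the paper additionally normalises RBO so that 1 means an identical ordering; this sketch shows only the plain truncated form with persistence p.

```python
from scipy.stats import kendalltau

def rbo(ranking_a, ranking_b, p=0.9):
    """Truncated RBO of two rankings (lists of ids, best first)."""
    seen_a, seen_b, score = set(), set(), 0.0
    for depth in range(1, min(len(ranking_a), len(ranking_b)) + 1):
        seen_a.add(ranking_a[depth - 1])
        seen_b.add(ranking_b[depth - 1])
        score += (p ** (depth - 1)) * len(seen_a & seen_b) / depth
    return (1 - p) * score

if __name__ == "__main__":
    # Hypothetical per-run MAP under human labels vs. LLM labels.
    human = {"runA": 0.31, "runB": 0.28, "runC": 0.25, "runD": 0.22}
    llm = {"runA": 0.33, "runB": 0.25, "runC": 0.27, "runD": 0.21}
    runs = sorted(human)
    tau, _ = kendalltau([human[r] for r in runs], [llm[r] for r in runs])
    order_h = sorted(runs, key=human.get, reverse=True)
    order_l = sorted(runs, key=llm.get, reverse=True)
    print(f"tau = {tau:.2f}, truncated RBO = {rbo(order_h, order_l, p=0.7):.2f}")
```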
Relative accuracy Latency Relative throughput Relative cost × 1/100 × 1/15 × 1 × 10 × 8 × 5 × 1 × 1/20 Employees Best crowd Typical crowd LLM (GPT-4) +24% +19% – +28% | 2309.10621#50 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 50 | Wenhu Chen, Ming Yin, Max Ku, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi Wang, and Pan Lu. Theoremqa: A theorem-driven question answering dataset. arXiv preprint arXiv:2305.12524, 2023.
Claude API. URL https://docs.anthropic.com/claude/reference/getting-started-with-the-api.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. | 2309.10691#50 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 50 | # 7.2 Data Processing and Optimization Approaches
There have been several advancements in data processing and optimization. The seminal method of importance sampling [20] stands out as a Monte Carlo approach designed to evaluate attributes of a particular distribution, even when the samples are drawn from a distribution that differs from the one under exploration. SlimPajama's deduplication mechanism is an adaptation of importance sampling, incorporating a heuristic that values unique data points. Recently, several data selection frameworks [18, 15, 34, 40] have been introduced, inspired by the concept of importance sampling. Among them, DSIR [40] presents a framework for the data selection challenge by aiming to choose a subset from a large, unlabeled raw dataset that aligns with a specific target distribution, given a set of unlabeled target examples. It builds upon the traditional importance resampling method, adapting it for data selection in large-scale models. DSIR operates as a scalable algorithm, determining importance weights within a reduced feature space and then selecting data based on these
importance resampling weights. In [34], the authors delve into the relationship between error scaling and dataset size. Their theoretical exploration suggests that by using a robust data pruning metric, which prioritizes which training examples to remove, the proposed method can suppress traditional power law scaling, potentially reaching exponential scaling for pruned dataset sizes. | 2309.10818#50 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of the SlimPajama dataset and train
each one using a 1.3B Cerebras-GPT model with ALiBi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our findings (for example, that increasing data
diversity is crucial after global deduplication) to a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
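As noted in the chunk text of the row above (the Section 7.2 excerpt on DSIR), the selection step it describes — importance weights computed in a reduced feature space, followed by resampling — can be illustrated with a small sketch. This is not the paper's implementation: the hashed word-bigram features, the 4096-bucket feature space, the Gumbel top-k sampling trick, and all function names below are assumptions chosen to make a compact, runnable example.

```python
import hashlib
from collections import Counter

import numpy as np


def hashed_bigram_counts(text, n_buckets=4096):
    """Map a document into a reduced feature space: counts of hashed word bigrams."""
    words = text.lower().split()
    counts = Counter(
        int(hashlib.md5(f"{a} {b}".encode()).hexdigest(), 16) % n_buckets
        for a, b in zip(words, words[1:])
    )
    vec = np.zeros(n_buckets)
    for bucket, c in counts.items():
        vec[bucket] = c
    return vec


def fit_feature_model(docs, n_buckets=4096, smoothing=1.0):
    """Fit a smoothed multinomial over hashed-bigram buckets for a corpus."""
    total = np.full(n_buckets, smoothing)
    for d in docs:
        total += hashed_bigram_counts(d, n_buckets)
    return total / total.sum()


def importance_resample(raw_docs, target_docs, k, n_buckets=4096, seed=0):
    """Pick k raw documents whose feature distribution matches the target corpus.

    Each document gets a log importance weight log p_target(features) - log p_raw(features);
    Gumbel top-k then samples k documents without replacement, proportional to exp(weight).
    """
    rng = np.random.default_rng(seed)
    p_target = fit_feature_model(target_docs, n_buckets)
    p_raw = fit_feature_model(raw_docs, n_buckets)
    log_ratio = np.log(p_target) - np.log(p_raw)

    log_w = np.array([hashed_bigram_counts(d, n_buckets) @ log_ratio for d in raw_docs])
    gumbel = rng.gumbel(size=len(raw_docs))
    chosen = np.argsort(-(log_w + gumbel))[:k]
    return [raw_docs[i] for i in chosen]


if __name__ == "__main__":
    raw = ["the stock market fell sharply today",
           "def train(model): return model.fit()",
           "parliament passed the budget bill",
           "import numpy as np  # numerical code"]
    target = ["python code for training models", "import torch and numpy"]
    print(importance_resample(raw, target, k=2))
```

Resampling with noise, rather than simply taking the k highest weights, keeps some diversity in the selected subset instead of concentrating on the few documents most similar to the target corpus.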
2309.10621 | 51 | [Table 5 body lost in extraction; the latency column read: hours to days, hours to days, hours, minutes to hours.] Table 5. Labelling schemes compared. "Crowd" are crowd workers via our in-house platform, "LLM" is the best-performing prompt from private experiments. "Latency" is the time to the first usable labels, "cost" is the dollar cost alone. These figures give an overall comparison, but please note that they depend on our particular computing resources, crowd contracts, assessor training, and other details.
# 4.7 Observations
We see somewhat better results than those reported by Faggioli et al. [2023], particularly in agreement on the raw labels (κ). There are at least two factors at work. First, we are using a more capable model (GPT-4 with local modifications, compared to stock GPT-3.5); and second, our prompts are based on our experiences in Bing and are relatively long, whereas those of Faggioli et al. are simpler. Even small wording changes can make a difference (Figure 4), and selecting prompt features makes a bigger difference still (Table 2). Again, this demonstrates that time spent on this configuration, which is comparable to time spent on instruments and instructions for crowd or in-house workers, can pay dividends. A minimal sketch of the κ computation appears after this row's reference list. | 2309.10621#51 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
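The chunk for this row reports agreement on raw labels as κ. For readers who want to compute that kind of number themselves, here is a minimal sketch of Cohen's κ between two label sequences. The gold and model arrays below are invented toy data, and the paper's exact agreement computation (label scale, any weighting) may differ; this is plain unweighted κ.

```python
from collections import Counter


def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)

    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Expected agreement if each labeller drew independently from their own marginals.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in set(labels_a) | set(labels_b))

    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)


if __name__ == "__main__":
    # Toy relevance labels on a 0-2 scale: hypothetical gold vs. model output.
    gold = [2, 0, 1, 0, 2, 1, 0, 0, 1, 2]
    model = [2, 0, 1, 1, 2, 0, 0, 0, 1, 2]
    print(f"kappa on raw labels: {cohen_kappa(gold, model):.2f}")
```

κ corrects raw percent agreement for the agreement two labellers would reach by chance given their marginal label distributions, which makes it a more demanding statistic than simple accuracy.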