doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.10691 | 51 | Ákos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew J. Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler. TextWorld: A learning environment for text-based games. In Tristan Cazenave, Abdallah Saffidine, and Nathan R. Sturtevant (eds.), Computer Games - 7th Workshop, CGW 2018, Held in Conjunction with the 27th International Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, July 13, 2018, Revised Selected Papers, volume 1017 of Communications in Computer and Information Science, pp. 41–75. Springer, 2018. doi: 10.1007/978-3-030-24337-1_3. URL https://doi.org/10.1007/978-3-030-24337-1_3. | 2309.10691#51 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
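The MINT abstract in the row above describes an evaluation loop in which an LLM works on a task over several turns, calls tools by emitting Python code, and can receive natural-language feedback simulated by another model. Below is a minimal sketch of such a loop; `run_python`, `evaluate_task`, and the toy assistant callable are illustrative stand-ins under stated assumptions, not MINT's actual framework or API.

```python
import io
import contextlib

def run_python(code: str) -> str:
    """Execute a proposed Python snippet and capture stdout (a stand-in for a
    sandboxed tool executor; a real harness would isolate this properly)."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # illustration only; never exec untrusted code like this
    except Exception as err:  # surface errors back to the model as observations
        return f"Error: {err!r}"
    return buffer.getvalue() or "(no output)"

def evaluate_task(task, assistant, feedback_model=None, max_turns=5):
    """Sketch of a multi-turn loop: the assistant proposes code, sees the
    execution result, and may receive natural-language feedback each turn."""
    history = [("user", task["prompt"])]
    for _ in range(max_turns):
        code = assistant(history)            # assistant returns a Python snippet
        observation = run_python(code)
        history.append(("assistant", code))
        history.append(("observation", observation))
        if task["expected"] in observation:  # crude success check for the sketch
            return True
        if feedback_model is not None:       # e.g. a GPT-4 call simulating a user
            history.append(("user_feedback", feedback_model(history)))
    return False

# Toy usage with stand-in callables instead of real LLM calls.
toy_task = {"prompt": "Compute 3 * 7 and print it.", "expected": "21"}
toy_assistant = lambda history: "print(3 * 7)"
print(evaluate_task(toy_task, toy_assistant))  # True
```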
2309.10305 | 52 | [Garbled excerpt of Table 9 (safety evaluation): rows are ChatGLM 2-6B, Vicuna 13B, LLaMA 2 7B-chat, LLaMA 2 13B-chat, Chinese Alpaca 2-13B, Baichuan 2-7B-chat, and Baichuan 2-13B-chat; the sensitive-topics column reads 61.80%, 61.00%, 51.90%, 53.40%, 53.20%, 78.20%, 87.10% in that model order, and the remaining columns (discrimination, profanity, unethical content, physical health, mental health, and a truncated financial category) range from roughly 85% to 100%.] | 2309.10305#52 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 52 | These results show that LLMs are competent at labelling – at the minimum, with GPT-4 and in the TREC-Robust setting. The labels are as close to those from humans as we could expect, given the disagreement between people to begin with, and we can reasonably consistently identify the hardest queries, best runs, and best groups.
We now turn to LLM labelling at scale, in the context of a running search engine, where LLMs have proved not just more efficient but more accurate than the status quo.
# 5 LLM LABELLING IN USE: WEB SEARCH AT BING
The results above are on one corpus – TREC-Robust '04, based on documents from the TREC ad-hoc collections – and labels from trained assessors working over simulated information needs. At Bing we have also seen good results with our web corpus, queries from real Bing use, and labels from searchers with real needs. Accordingly we have been using LLMs, in conjunction with a reduced number of human labellers, for most of our offline metrics since late 2022.
# 5.1 Experience with LLMs at Bing
At Bing we have made heavy use of crowd workers, for many years, to scale to the number of labels, languages, and markets we need. Despite systems for detecting and removing low quality labels and workers, this scale has come at a cost of natural biases, mistakes, and adversarial workers. | 2309.10621#52 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 52 | Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. CoRR, abs/2306.06070, 2023a. doi: 10.48550/arXiv.2306.06070. URL https://doi.org/10.48550/arXiv.2306.06070.
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023b.
Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32):e2123433119, 2022. | 2309.10691#52 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 52 | The training of large language models, such as GPT [29, 30, 4] and BERT [10], requires significant amounts of data to capture and generalize over the vast intricacies of human language. As a result, researchers often combine data from various sources, such as web text, Github, Books, ArXiv, Wikipedia, etc. Related work and difficulties have been explored in the context of data combination for training large language models. (1) Concatenation of diverse datasets: One of the simplest methods for combining data is to concatenate various corpora, covering diverse topics, styles, and sources. This ensures that the model gets a broad view of the language. (2) WebText and similar corpora: For OpenAI's GPT-2, a dataset called WebText [30] was curated by scraping content from the internet. This kind of data provides a rich mix of formal, informal, factual, and opinionated text, thus offering diverse training material. (3) Balancing and weighting: Simply combining data may lead to issues if one source is overrepresented. Prior studies have applied weights to different data portions or ensure that the | 2309.10818#52 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
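The chunk above surveys ways of combining sources, including reweighting domains in the style of DoReMi before resampling. The sketch below shows only the resampling step: drawing a batch whose composition follows a set of mixture proportions. The weights, domain names, and helper functions are hypothetical placeholders, not SlimPajama-DC's published configurations.

```python
import itertools
import random

# Hypothetical mixture proportions; treat these numbers as placeholders.
DOMAIN_WEIGHTS = {
    "commoncrawl": 0.52, "c4": 0.27, "github": 0.05, "books": 0.04,
    "arxiv": 0.04, "wikipedia": 0.04, "stackexchange": 0.04,
}

def sample_domain_batch(domain_iterators, weights, batch_size, seed=0):
    """Draw a batch whose per-domain composition follows the mixture weights."""
    rng = random.Random(seed)
    domains = list(weights)
    probs = [weights[d] for d in domains]
    batch = []
    for _ in range(batch_size):
        domain = rng.choices(domains, weights=probs, k=1)[0]
        batch.append((domain, next(domain_iterators[domain])))
    return batch

def doc_stream(name):
    """Dummy endless document stream standing in for a real tokenised shard."""
    return (f"{name}-doc-{i}" for i in itertools.count())

# Toy usage: infinite dummy streams per domain, sampled according to the weights.
streams = {d: doc_stream(d) for d in DOMAIN_WEIGHTS}
for domain, doc in sample_domain_batch(streams, DOMAIN_WEIGHTS, batch_size=5):
    print(domain, doc)
```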
2309.10621 | 53 | In Table 5 we summarise our experiences with labelling to date, considering (top to bottom) full-time Bing employees (mainly scientists and engineers working on metrics); our best crowd workers, recruited and trained specifically for metrics problems and with close oversight; our general pool of crowd workers, subject to quality control but minimal training; and our LLM models, based on GPT-4. LLM models give us better accuracy at vastly reduced latency and cost. In current work with newer models and prompts, we expect to see a further increase in accuracy of 8–10% in some languages, with around five times the throughput.
The prompts in use are confidential. In our case we include the URL, since this is always defined for web documents; we also include date, location, language and other information available from our logs. In our experience LLMs do remarkably well. They have proved more accurate than any third-party labeller, including staff; they are much faster end-to-end than any human judge, including crowd workers; they scale to much better throughput; and of course are many times cheaper. This has let us measure many more results than previously, with associated gains in sensitivity (we can see smaller effects if we label more things). The end-to-end speed, also much improved, is helping Bing engineers try more things and get more done.
# 5.2 Evaluating labellers and prompts | 2309.10621#53 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
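The row above notes that Bing's production prompts are confidential but that they include the query plus the URL, date, location, and language from the logs. The sketch below shows a generic relevance-labelling prompt of that shape and a parser for the returned grade; the template wording, field names, and 0–2 scale are assumptions for illustration, not the prompt used at Bing.

```python
PROMPT_TEMPLATE = """You are a search quality rater. Given a query and a web result,
rate how well the result satisfies the searcher's need.

Query: {query}
Searcher location: {location}   Date: {date}   Language: {language}
Result URL: {url}
Result text: {text}

Answer with a single digit from 0 (not relevant) to 2 (highly relevant)."""

def build_label_prompt(query, result, context):
    """Fill the template with the query, the candidate result, and the
    contextual metadata (URL, date, location, language) mentioned above."""
    return PROMPT_TEMPLATE.format(
        query=query,
        url=result["url"],
        text=result["text"][:2000],  # truncate long pages for the prompt
        location=context.get("location", "unknown"),
        date=context.get("date", "unknown"),
        language=context.get("language", "unknown"),
    )

def parse_label(model_output: str) -> int:
    """Map the model's free-text answer to a graded label, defaulting to 0."""
    for token in model_output.split():
        if token.strip(".") in {"0", "1", "2"}:
            return int(token.strip("."))
    return 0

# Toy usage with a canned model response instead of a real LLM call.
prompt = build_label_prompt(
    "tomb of the unknown soldier",
    {"url": "https://example.org/article", "text": "The Tomb of the Unknown Soldier ..."},
    {"location": "US", "date": "2023-09-19", "language": "en"},
)
print(prompt)
print(parse_label("2 - the page directly answers the query"))
```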
2309.10691 | 53 | Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro Henrique Martins, Amanda Bertsch, José G. C. de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, and André F. T. Martins. Bridging the gap: A survey on integrating (human) feedback for natural language generation. CoRR, 2023.
Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. Improving language model negotiation with self-play and in-context learning from ai feedback, 2023.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. PAL: program-aided language models. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 10764–10799. PMLR, 2023. URL https://proceedings.mlr.press/v202/gao23f.html. | 2309.10691#53 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 53 | Balancing and weighting: Simply combining data may lead to issues if one source is overrepresented. Prior studies have applied weights to different data portions or ensured that the combined dataset is balanced in terms of sources, styles, and other criteria. For instance, DoReMi [39] first trains a small proxy model using group distributionally robust optimization across domains, generating domain weights (or mixture proportions) without relying on information from subsequent tasks. Following this, they utilize these domain weights to resample a dataset, on which they then train a full-size model. (4) Multimodal Training: Combining text with other data forms, like images or sounds, can also enhance language model training, especially for tasks that require understanding across modalities. | 2309.10818#53 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 54 | Table 9: The result of different chat models on our safety evaluation benchmarks.
[Figure 7 plot residue: C-Eval 5-shot, MMLU 5-shot, and CMMLU 5-shot curves for Baichuan 2-7B checkpoints; y-axis scores roughly 20–60, x-axis checkpoints from 220 to 2,640 billion training tokens.]
Figure 7: The results of intermediary checkpoints of Baichuan 2-7B which will be released to the public.
Transformers (Vaswani et al., 2017). Kaplan et al. (2020) proposed the scaling laws for large model pre-training. By systematically analyzing model performance as parameters and data size increased, they provided a blueprint for the current era of massive models with hundreds of or even billions of parameters. | 2309.10305#54 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 54 | # 5.2 Evaluating labellers and prompts
In Bing's case we have found breadth preferable to depth: that is, we prefer small data for many queries to the TREC-Robust approach of more data for fewer queries. All else being equal, we also prefer queries which resemble a real web search workload rather than the invented needs of TREC-Robust.
Our gold labels are, therefore, largely gathered in situ: from employees and contractors in the context of their normal search activity, and also from feedback from the general public. This data is collected at or close to the time of need, by people who had the need, and in view of a full SERP (including e.g. images, maps, and advertisements). These properties mean the data is very reliable: if a label says some document is good (or bad), it is almost certainly so in the eyes of the person who issued the query. | 2309.10621#54 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
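The chunk above treats labelling as a binary classification over gold preference pairs and scores labellers by pairwise agreement. A small sketch of that metric follows, assuming a strict "preferred scores higher" rule (how ties are handled is an assumption of this sketch, not something the row above specifies).

```python
def pairwise_agreement(preference_pairs, scores):
    """Fraction of gold preference pairs where the labeller's score for the
    preferred result is strictly higher than for the non-preferred one.
    `preference_pairs` is an iterable of (query, preferred_id, non_preferred_id);
    `scores` maps (query, result_id) -> the labeller's numeric label."""
    agree = total = 0
    for query, preferred, non_preferred in preference_pairs:
        total += 1
        if scores[(query, preferred)] > scores[(query, non_preferred)]:
            agree += 1
    return agree / total if total else 0.0

# Toy usage: two gold pairs, one of which the labeller gets right.
gold_pairs = [("q1", "docA", "docB"), ("q2", "docC", "docD")]
labels = {("q1", "docA"): 2, ("q1", "docB"): 0,
          ("q2", "docC"): 1, ("q2", "docD"): 1}  # a tie counts as disagreement here
print(pairwise_agreement(gold_pairs, labels))    # 0.5
```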
2309.10691 | 54 | Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. CRITIC: large language models can self-correct with tool-interactive critiquing. CoRR, abs/2305.11738, 2023. doi: 10.48550/arXiv.2305.11738. URL https://doi.org/10.48550/arXiv.2305.11738.
Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. CoRR, abs/2305.11554, 2023. doi: 10.48550/arXiv.2305.11554. URL https://doi.org/10.48550/arXiv.2305.11554.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. | 2309.10691#54 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 54 | # 7.4 Large Batch Training for Large Language Models
Large language models inherently possess a structure that supports parallelization, especially when optimized using techniques that allow for batch training. When computational resources permit, large batch sizes are favored to expedite the training of large models containing potentially millions or billions of parameters. At a fundamental level, larger batch sizes enhance the quality of each gradient update since they consider a more considerable chunk of the dataset. Conversely, a smaller batch size means that model parameter updates are based on gradients derived from a limited dataset portion. This smaller dataset slice might not comprehensively capture the intricate relationships between features and labels. Therefore, it might seem that larger batch sizes consistently offer advantages in training. However, [19] pointed out that this perspective does not factor in the model's capacity to generalize to new, unseen | 2309.10818#54 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
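The chunk above discusses the generalization gap of large-batch training and the practice of starting from a smaller batch size and gradually enlarging it as training advances [17]. A toy batch-size ramp in that spirit is sketched below; the stage counts and sizes are arbitrary assumptions, and the paper's own progressive weight decay schedule is not reproduced here.

```python
def batch_size_schedule(step, total_steps, start=256, end=2048, num_stages=4):
    """Piecewise-constant ramp from `start` to `end` sequences per batch:
    the run is split into `num_stages` equal phases and the batch size is
    interpolated geometrically between the first and last phase."""
    stage = min(num_stages - 1, step * num_stages // total_steps)
    ratio = (end / start) ** (stage / (num_stages - 1))
    return int(round(start * ratio))

# Example: batch size at a few points of a 100k-step run -> 256, 512, 1024, 2048, 2048.
for step in (0, 25_000, 50_000, 75_000, 99_999):
    print(step, batch_size_schedule(step, 100_000))
```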
2309.10305 | 55 | Seizing upon these scaling laws, organizations like OpenAI, Google, Meta, and Anthropic have engaged in a computing arms race to create ever-larger LLMs, spurred by OpenAI's 175-billion-parameter proprietary language model GPT-3 (Brown et al., 2020). The few-shot or even zero-shot ability of LLMs has resolved most natural language understanding tasks, from code generation to math-solving problems or even open-world scenarios. Specialized scientific LLMs like Galactica (Taylor et al., 2022) have also emerged to showcase the potential for large models to assimilate technical knowledge. However, raw parameter count alone does not determine model capability - Chinchilla (Hoffmann et al., 2022) demonstrated that scaling model capacity
according to the number of tokens, rather than just parameters, can yield better sample efficiency. | 2309.10305#55 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 55 | Our ground truth corpus comprises queries, descriptions of need, metadata like location and date, and at least two example results per query. Results are tagged – again, by the real searcher – as being good, neutral, or bad and these tags may be reviewed by Microsoft staff prior to inclusion in our corpus. Similar to the TREC experiments above, from this we can derive pairs of preferred and non-preferred results and then treat labelling and scoring as a binary classification problem: the preferred result should score higher than the non-preferred, for all queries and pairs of results. Again, we can use pairwise agreement to evaluate the labels. At the time of these experiments our ground corpus comprised about 2.5 million such pairs, in about ten languages and from about fifty countries.
Using three labels does conflate small distinctions ("it's a little bit better", e.g. good vs neutral results) and large distinctions ("it's a lot better", good vs bad results), but our ground truth corpus has distinct advantages in that we can collect preferences from real searchers in their own context, and providing a preference is easier than providing absolute labels [Carterette et al. 2008]. Moreover, the focus on general labels maximises the reuse of the corpus as the definition of a good or bad result is unlikely to evolve over time, whereas subtle distinctions might be subject to change. | 2309.10621#55 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
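The row above derives preferred/non-preferred pairs from the good/neutral/bad tags collected in situ. A minimal sketch of that derivation follows, assuming the obvious grade ordering and that only results with different grades yield a preference; it complements the pairwise-agreement sketch earlier in this section.

```python
from itertools import combinations

GRADE = {"bad": 0, "neutral": 1, "good": 2}

def derive_preference_pairs(tagged_results):
    """Turn per-query graded tags into (query, preferred, non_preferred) pairs.
    `tagged_results` maps query -> {result_id: "good" | "neutral" | "bad"}."""
    pairs = []
    for query, tags in tagged_results.items():
        for a, b in combinations(tags, 2):
            if GRADE[tags[a]] > GRADE[tags[b]]:
                pairs.append((query, a, b))
            elif GRADE[tags[b]] > GRADE[tags[a]]:
                pairs.append((query, b, a))
    return pairs

# Toy usage: one query with a good, a neutral, and a bad result -> three pairs.
print(derive_preference_pairs({"q1": {"d1": "good", "d2": "neutral", "d3": "bad"}}))
```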
2309.10691 | 55 | Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021.
Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, Yi Ren, Zhou Zhao, and Shinji Watanabe. Audiogpt: Understanding and generating speech, music, sound, and talking head. CoRR, abs/2304.12995, 2023a. doi: 10.48550/arXiv.2304.12995. URL https://doi.org/10.48550/arXiv.2304.12995.
Shulin Huang, Shirong Ma, Yinghui Li, Mengzuo Huang, Wuhe Zou, Weidong Zhang, and Hai-Tao Zheng. Lateval: An interactive llms evaluation benchmark with incomplete information from lateral thinking puzzles. arXiv preprint arXiv:2308.10855, 2023b. | 2309.10691#55 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 55 | data, nor the intricate, non-convex optimization landscape of contemporary large models. In practice, multiple studies [17, 19] have demonstrated that while larger batch sizes might hasten convergence, they can impair a model's generalization to new datasets, irrespective of the deep network type. This observed disparity has been named the Generalization Gap. A method [17] to address this gap involves starting from a smaller batch size and gradually enlarging it as training advances. In our study, we explore this problem through a new and unique angle of progressive weight decay training.
# 8 Conclusion
We have presented SlimPajama-DC, a comprehensive study on understanding the data domain weights and combinations for training large language models. Notably, SlimPajama-DC can operate on compact models, and its advantages can be seamlessly transferred to models that are several times larger. This leads to a remarkable acceleration in training on the SlimPajama with the optimal sampling probabilities across domains for larger models. Through this, we aim to spark further exploration into data-centric methods to enhance the efficiency of large language model training.
# References | 2309.10818#55 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 56 | according to the number of tokens, rather than just parameters, can yield better sample efficiency.
Concurrent with the development of private LLMs, academic and non-profit efforts have worked to develop open-source alternatives such as Bloom (Scao et al., 2022), OPT (Zhang et al., 2022) and Pythia (Biderman et al., 2023b). Although some open-source large language models contain up to 175 billion parameters, most are trained on only 500 billion tokens or fewer. This is relatively small considering that 7-billion-parameter models can still improve significantly after being trained on trillions of tokens. Among these open-source models, LLaMA (Touvron et al., 2023b) and its successor LLaMA 2 (Touvron et al., 2023c) stand out for their performance and transparency, and were quickly optimized by the community for faster inference and a wide range of applications. | 2309.10305#56 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 56 | Our user-generated ground truth corpus gives us an evaluation which is independent of the labels from third-party judges. In particular, by measuring against user-generated labels we can identify cases where the model is more accurate than third-party human judges; if we only had third-party labels, we could identify labelling disagreements but not resolve them one way or the other. For AUC scores to be useful, of course the data must represent some population of interest: at Bing we stratify the triples by language and by important result attributes (for example recency, authority, or topicality). This is not a uniform sample but instead lets us identify areas of particular concern.
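As an illustration of this kind of check, the sketch below (ours, not Bing's pipeline) computes AUC of model scores against user-derived gold preferences, overall and per stratum; the field names and values are invented for the example.

```python
# Illustrative only: AUC of model relevance scores against user-derived gold
# labels, reported overall and per stratum (here, by language).
from collections import defaultdict
from sklearn.metrics import roc_auc_score

records = [
    {"gold": 1, "model_score": 0.92, "language": "en"},
    {"gold": 0, "model_score": 0.35, "language": "en"},
    {"gold": 1, "model_score": 0.71, "language": "de"},
    {"gold": 0, "model_score": 0.64, "language": "de"},
]

def auc(rows):
    return roc_auc_score([r["gold"] for r in rows],
                         [r["model_score"] for r in rows])

print("overall AUC:", auc(records))
by_stratum = defaultdict(list)
for r in records:
    by_stratum[r["language"]].append(r)
for language, rows in sorted(by_stratum.items()):
    print(language, "AUC:", auc(rows))
```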
# 5.3 Monitoring the LLM system
The results above give us a good deal of confidence that a large language model, appropriately prompted, can produce high-quality labels for at least some of the aspects important to our ongoing evaluation. As an additional safety check, we routinely compare the LLM's labels to those from (trained and qualified) assessors. Every week, we take a stratified sample of query:document pairs labelled by the model, chosen from amongst those that our experiments have used
| 2309.10621#56 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 56 | Ehsan Kamalloo, Aref Jafari, Xinyu Zhang, Nandan Thakur, and Jimmy Lin. HAGRID: A human-llm collaborative dataset for generative information-seeking with attribution. CoRR, abs/2307.16883, 2023. doi: 10.48550/arXiv.2307.16883. URL https://doi.org/10. 48550/arXiv.2307.16883.
Mina Lee, Percy Liang, and Qian Yang. Coauthor: Designing a human-ai collaborative writing dataset for exploring language model capabilities. In Simone D. J. Barbosa, Cliff Lampe, Caroline Appert, David A. Shamma, Steven Mark Drucker, Julie R. Williamson, and Koji Yatani (eds.), CHI '22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022 - 5 May 2022, pp. 388:1–388:19. ACM, 2022a. doi: 10.1145/3491102.3502030. URL https://doi.org/10.1145/3491102.3502030. | 2309.10691#56 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 56 | # References
[1] Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397–2430. PMLR, 2023. 7
[2] Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. Gpt-neox-20b: An open-source autoregressive language model. arXiv preprint arXiv:2204.06745, 2022. 10
[3] Sid Black, Gao Leo, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, Mar. 2021. If you use this software, please cite it using these metadata. 11, 12 | 2309.10818#56 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 57 | In addition to these foundation models, many chat models have been proposed to follow human instructions. Most of them fine-tune a foundation model to align with human preferences (OpenAI, 2022; Wang et al., 2023). These chat models have demonstrated a marked improvement in understanding human instructions and solving complex tasks (Chiang et al., 2023; Xu et al., 2023; Sun et al., 2023). To further improve alignment, Ouyang et al. (2022) incorporate the Reinforcement Learning from Human Feedback (RLHF) approach, which learns from human preferences by training a reward model on human-rated outputs. Other methods, such as direct preference optimization (DPO) (Rafailov et al., 2023) and reinforcement learning from AI feedback (RLAIF) (Bai et al., 2022b), have been proposed to improve on RLHF in both efficiency and effectiveness.
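For intuition, here is a hedged sketch of the DPO objective mentioned above (after Rafailov et al., 2023); it is not Baichuan 2's training code, and the log-probabilities are toy values. Given a preferred and a rejected response scored by the policy and by a frozen reference model, the loss pushes the policy toward the preferred response without an explicit reward model.

```python
# Illustrative DPO loss on a batch of preference pairs (toy numbers).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implicit rewards are measured relative to the frozen reference model.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -9.0]),
                torch.tensor([-12.5, -9.6]), torch.tensor([-13.5, -9.4]))
print(loss.item())
```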
# 7 Limitations and Ethical Considerations | 2309.10305#57 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 57 |
recently. Those are re-labelled by our reviewers, and we monitor for shifts either in disagreement rate or patterns of disagreement; any changes are investigated by a dedicated metrics team with expertise in both the crowd and LLM processes. In practice, large changes are rare, and resolved in favour of the LLM as often as in favour of the humans. Since we use a highly skilled set of judges this remains an expensive process, but it is relatively lightweight and to date has needed less than a day a week of employee time.
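A minimal sketch of this kind of routine check follows; it is our illustration rather than the production system, and the labels, baseline, and threshold are invented. It compares LLM labels with reviewer labels on a weekly sample and flags a shift in the disagreement rate.

```python
# Illustrative weekly check: flag shifts in LLM-vs-reviewer disagreement.
def disagreement_rate(llm_labels, reviewer_labels):
    pairs = [(llm_labels[k], reviewer_labels[k])
             for k in llm_labels.keys() & reviewer_labels.keys()]
    return sum(a != b for a, b in pairs) / len(pairs)

this_week_llm = {"q1:d1": 2, "q2:d7": 1, "q3:d2": 3, "q4:d5": 1}
this_week_reviewer = {"q1:d1": 2, "q2:d7": 0, "q3:d2": 2, "q4:d5": 1}

rate = disagreement_rate(this_week_llm, this_week_reviewer)
baseline = 0.15   # assumed historical disagreement rate
if abs(rate - baseline) > 0.10:  # assumed alerting threshold
    print(f"investigate: disagreement rate {rate:.2f} vs baseline {baseline:.2f}")
else:
    print(f"ok: disagreement rate {rate:.2f}")
```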
In addition to the human oversight of our LLM-based labels, we have a large set of queries that we consistently relabel. On a day-to-day basis we expect no change in the labels associated with this set; that is, the expected value of the day-n labels minus the day-(n+1) labels is zero. This automated system is designed to monitor the health of labelling systems and provides a more rapid response than the human-based evaluation. | 2309.10621#57 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 57 | Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael S. Bernstein, and Percy Liang. Evaluating human-language model interaction. CoRR, abs/2212.09746, 2022b. doi: 10.48550/arXiv.2212.09746. URL https://doi.org/10.48550/arXiv.2212.09746.
Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M. Dai. Mind's eye: Grounded language model reasoning through simulation. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023a. URL https://openreview.net/pdf?id=4rXMRuoJlai.
| 2309.10691#57 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 57 | [4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. 7, 9, 18
[5] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. 15
[6] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018. 10
[7] Together Computer. Redpajama: An open source recipe to reproduce llama train- ing dataset, 2023. 1, 3, 7, 11, 12, 17 | 2309.10818#57 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 58 | # 7 Limitations and Ethical Considerations
Like other large language models, Baichuan 2 faces ethical challenges. It is prone to biases and toxicity, especially given that much of its training data originates from the internet. Despite our best efforts to mitigate these issues using benchmarks like Toxigen (Hartvigsen et al., 2022), the risks cannot be eliminated, and toxicity tends to increase with model size. Moreover, the knowledge of Baichuan 2 models is static and can become outdated or incorrect, posing challenges in fields that require up-to-date information, such as medicine or law. While safety is optimized for Chinese and English, the model has limitations in other languages and may not fully capture biases relevant to non-Chinese cultures.
There is also the potential for misuse, as the model could be used to generate harmful or misleading content. Although we strive to balance safety and utility, some safety measures may appear overly cautious, affecting the model's usability for certain tasks. We encourage users to make responsible and ethical use of Baichuan 2 models. Meanwhile, we will continue to address these issues and release updated versions in the future.
# References | 2309.10305#58 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 58 | Our system therefore sits somewhere between Clarke et al.'s "manual verification" and "fully automated" options [2023], with the scale of a fully automated system but some degree of control and quality assurance from manual verification. Disagreements, and analyses of these, can inform future developments of the metrics and the gold set as well as the LLM labeller.
We note, too, that although LLM labels are important to our evaluation they are only one part of a web-scale search system. Amongst other things, web search needs to account for spam, misinformation, piracy, and other undesirable material; needs to treat some topics carefully and with editorial input (health, finance, and others); and needs to account for diversity in the final ranking. Our LLM prompts are not intended to replace these or other safety systems.
# 6 POTENTIAL LIMITATIONS AND PITFALLS | 2309.10621#58 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 58 |
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, and Jie Tang. Webglm: Towards an efficient web-enhanced question answering system with human preferences. In Ambuj Singh, Yizhou Sun, Leman Akoglu, Dimitrios Gunopulos, Xifeng Yan, Ravi Kumar, Fatma Ozcan, and Jieping Ye (eds.), Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2023, Long Beach, CA, USA, August 6-10, 2023, pp. 4549–4560. ACM, 2023b. doi: 10.1145/3580305.3599931. URL https://doi.org/10.1145/3580305.3599931. | 2309.10691#58 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 58 |
[7] Together Computer. Redpajama: An open source recipe to reproduce llama training dataset, 2023. 1, 3, 7, 11, 12, 17
[8] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022. 14
[9] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International conference on machine learning, pages 933–941. PMLR, 2017. 9
[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019. 18
[11] Nolan Dey, Gurpreet Gosal, Hemant Khachane, William Marshall, Ribhu Pathria, Marvin Tom, Joel Hestness, et al. Cerebras-gpt: Open compute-optimal language models trained on the cerebras wafer-scale cluster. arXiv preprint arXiv:2304.03208, 2023. 1, 9, 11 | 2309.10818#58 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 59 | # References
Yuvanesh Anand, Zach Nussbaum, Brandon Duderstadt, Benjamin Schmidt, and Andriy Mulyar. 2023. Gpt4all: Training an assistant-style chatbot with large scale data distillation from gpt-3.5-turbo. GitHub.
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. | 2309.10305#59 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 59 | Using LLMs for automated relevance labelling is a recent phenomenon, and initial evidence is promising to say the least. The field would, however, also benefit from acknowledging how little we understand potential limitations and negative externalities of these approaches. Language models are known to reproduce and amplify harmful stereotypes and biases of social import [Bender et al. 2021; Blodgett et al. 2020; Bolukbasi et al. 2016; Caliskan et al. 2017; Gonen and Goldberg 2019] and therefore there is an immediate need to study if and how these biases may also manifest in relevance labelling. These biases may further intensify existing representational and allocative harms from search systems [Noble 2018; Sweeney 2013]. Other forms of bias unrelated to concerns of demographic fairness – such as under-estimating the relevance of longer documents [Hofstätter et al. 2020] – may also manifest more systemically when relevance labels are solicited from LLMs rather than crowd-workers. It may be tempting to suggest employing a variety of different prompts and underlying LLMs to address this issue – similar to employing a diverse group of | 2309.10621#59 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 59 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. Agentbench: Evaluating llms as agents. CoRR, abs/2308.03688, 2023c. doi: 10.48550/ arXiv.2308.03688. URL https://doi.org/10.48550/arXiv.2308.03688.
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. Agentbench: Evaluating llms as agents. arXiv preprint arXiv:2308.03688, 2023d. | 2309.10691#59 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 59 | [12] Nathan Habib Sheon Han Nathan Lambert Nazneen Rajani Omar Sanseviero Lewis Tunstall Thomas Wolf Edward Beeching, Clémentine Fourrier. Open llm leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, 2023. 10, 11, 17
[13] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. 7, 17 | 2309.10818#59 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 60 | Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073.
Baichuan. 2023a. A 13b large language model developed by baichuan intelligent technology.
Baichuan. 2023b. A large-scale 7b pretraining language model developed by baichuan-inc. | 2309.10305#60 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 60 | It may be tempting to suggest employing a variety of different prompts and underlying LLMs to address this issue (similar to employing a diverse group of crowd-workers), but that may or may not have the desired effect if the outputs across these variations are correlated and exhibit similar biases. The quality of LLM-generated relevance labels may also vary disproportionately for content that is in different languages, from different geographical locations, and for different demographic groups due to disparate availability of data across these dimensions that have been employed for LLM training. Efforts to address these biases may further create undesirable incentives for more pervasive data collection and user surveillance. | 2309.10621#60 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
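The chunk above argues that ensembling several prompts or LLMs only helps if their outputs are not correlated. As a rough, hypothetical illustration (none of this code comes from the paper; the labels are made up), pairwise chance-corrected agreement between two prompt variants' labels is one way to check for such correlation:

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two labellers on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two labellers were statistically independent.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Hypothetical 0-2 relevance labels from two prompt variants on the same query-document pairs.
prompt_v1 = [2, 1, 0, 2, 2, 1, 0, 0, 1, 2]
prompt_v2 = [2, 1, 0, 2, 1, 1, 0, 0, 1, 2]
print(f"kappa = {cohens_kappa(prompt_v1, prompt_v2):.2f}")  # a high kappa suggests correlated errors, so little ensembling benefit

A high kappa between variants would indicate that averaging them behaves less like a diverse crowd and more like asking the same labeller twice.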
2309.10691 | 60 | Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. CoRR, abs/2304.09842, 2023. doi: 10.48550/arXiv.2304.09842. URL https://doi.org/10.48550/arXiv.2304.09842.
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented language models: a survey. arXiv preprint arXiv:2302.07842, 2023. | 2309.10691#60 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
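The MINT abstract above describes an evaluation loop in which the model's emitted Python is executed and the result is returned as the next user turn. The sketch below is an assumption-laden illustration of such a loop, not the MINT framework's actual API: call_model is a placeholder for your own LLM endpoint, and a real harness would sandbox the execution.

import contextlib
import io
import re

def call_model(messages):
    """Placeholder: call your LLM endpoint and return its reply text."""
    raise NotImplementedError

def extract_code(reply):
    # Pull the first ```python ...``` block out of the reply, if any.
    match = re.search(r"```python\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else None

def run_code(code, namespace):
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, namespace)  # NOTE: must be sandboxed in any real evaluation
    except Exception as exc:
        return f"Error: {exc!r}"
    return buffer.getvalue() or "(no output)"

def interact(task_prompt, max_turns=5):
    messages = [{"role": "user", "content": task_prompt}]
    namespace = {}
    for _ in range(max_turns):
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        code = extract_code(reply)
        if code is None:   # no code block: treat the reply as the final answer
            return reply
        observation = run_code(code, namespace)
        messages.append({"role": "user", "content": "Execution output:\n" + observation})
    return None            # turn budget exhausted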
2309.10818 | 60 | [14] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, Sept. 2021. 10, 11, 17 [15] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020. 17
[16] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021. 10
[17] Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. Advances in neural information processing systems, 30, 2017. 15, 19 | 2309.10818#60 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
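The SlimPajama-DC abstract above distinguishes global deduplication (across source datasets) from local deduplication (within a single source). The toy sketch below illustrates only that distinction, using exact hashing on made-up documents; the real pipeline relies on fuzzy matching at far larger scale.

import hashlib

def fingerprint(doc: str) -> str:
    # Exact-match fingerprint; production pipelines use fuzzy (e.g. MinHash-style) signatures.
    return hashlib.sha256(doc.strip().lower().encode()).hexdigest()

def local_dedup(sources: dict) -> dict:
    """Drop duplicates only within each source."""
    out = {}
    for name, docs in sources.items():
        seen, kept = set(), []
        for d in docs:
            h = fingerprint(d)
            if h not in seen:
                seen.add(h)
                kept.append(d)
        out[name] = kept
    return out

def global_dedup(sources: dict) -> dict:
    """Drop duplicates across all sources combined."""
    seen, out = set(), {}
    for name, docs in sources.items():
        kept = []
        for d in docs:
            h = fingerprint(d)
            if h not in seen:
                seen.add(h)
                kept.append(d)
        out[name] = kept
    return out

sources = {
    "web":  ["the cat sat", "the cat sat", "llms are fun"],
    "wiki": ["the cat sat", "graph theory basics"],
}
print({k: len(v) for k, v in local_dedup(sources).items()})   # {'web': 2, 'wiki': 2}
print({k: len(v) for k, v in global_dedup(sources).items()})  # {'web': 2, 'wiki': 1}

The global pass removes the cross-source duplicate that the local pass keeps, which is exactly the difference the paper studies.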
2309.10305 | 61 | Baichuan. 2023b. A large-scale 7b pretraining language model developed by baichuan-inc.
Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023a. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397–2430. PMLR.
Stella Rose Biderman, Hailey Schoelkopf, Quentin G. Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023b. Pythia: A suite for analyzing large language models across training and scaling. ArXiv, abs/2304.01373. | 2309.10305#61 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
Developers of search systems who evaluate using and optimise towards these LLM-based labels also risk falling into the trap of over-fitting to the idiosyncrasies of the LLM rather than towards improving true relevance, in line with Goodhart's law [Chrystal and Mizen 2001; Goodhart 1975; Hoskin 1996; Thomas and Uminsky 2022]. Agreement with our in-situ or TREC gold labels suggests this is not yet a problem (we are closer to the ground truth with LLMs than with third-party assessors), but this may change as large models play a bigger role in ranking or as web authors start optimising for LLM labels. LLM-generated relevance labels may also show bias towards ranking models that themselves
[Figure 5: labelling options plotted by relative cost (0% to 1000%, x-axis) against accuracy relative to a typical crowd (90% to 130%, y-axis), comparing our approach, real searchers, the best crowd, and the typical crowd.] | 2309.10621#61 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 61 | Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. Webgpt: Browser-assisted question-answering with human feedback. CoRR, abs/2112.09332, 2021. URL https://arxiv.org/abs/2112.09332.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023, 2023.
OpenAI. Gpt-4 technical report, 2023.
# OpenAI API. URL https://openai.com/blog/openai-api. | 2309.10691#61 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 61 | [18] Angelos Katharopoulos and François Fleuret. Not all samples are created equal: Deep learning with importance sampling. In International conference on machine learning, pages 2525–2534. PMLR, 2018. 17
[19] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016. 15, 18, 19
[20] Teun Kloek and Herman K Van Dijk. Bayesian estimates of equation system pa- rameters: an application of integration by monte carlo. Econometrica: Journal of the Econometric Society, pages 1â19, 1978. 17
[21] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. The stack: 3 tb of permissively licensed source code. Preprint, 2022. 14
| 2309.10818#61 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10621 | 62 | Fig. 5. Labelling options discussed in this work, along with the cost and accuracy we see at Bing. All else being equal, as experimenters we would like to move up and left in this space. A traditional approach uses gold and silver labels to improve crowd workers; we use gold labels to select LLMs and prompts.
incorporate LLMs, although if we are to truly embrace the lens of knowledge distillation in describing the evaluation and optimisation using these labels then those biases may at least be partially justified.
Biases may arise not just from LLMs learning spurious correlations with respect to their inputs, but due to the absence of certain information that human annotators would have access to (e.g. images and other non-textual content), and more subtly due to differences in what these models and humans pay attention to [Bolotova et al. 2020; Kazai et al. 2022]. Whether website designers can take advantage of such biases in LLMs-for-labelling systems to unfairly gain more exposure for their content, or whether large chunks of the web optimising towards what LLMs deem important leads to undesirable shifts in trends and homogenisation of online content, are also important topics for future research. Examples of the latter can be witnessed in other domains such as the impact of online streaming services on length of songs in the music industry.3 | 2309.10621#62 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
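The Fig. 5 caption above summarises the idea of using a small set of first-party gold labels to select LLMs and prompt variants before labelling in bulk. A minimal, hypothetical sketch of that selection step follows; llm_label and the prompt templates are placeholders, not the paper's implementation.

def llm_label(prompt_template: str, query: str, doc: str) -> int:
    """Placeholder: call your LLM with the filled-in template and parse a graded relevance label."""
    raise NotImplementedError

def agreement_with_gold(prompt_template, gold):
    """Fraction of gold (query, doc, label) triples that the prompt variant reproduces."""
    hits = sum(llm_label(prompt_template, q, d) == label for q, d, label in gold)
    return hits / len(gold)

def select_prompt(candidate_prompts, gold):
    # Keep the variant that agrees best with the small first-party gold set,
    # then use only that variant to generate labels in bulk.
    return max(candidate_prompts, key=lambda p: agreement_with_gold(p, gold))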
2309.10691 | 62 | OpenAI. Gpt-4 technical report, 2023.
# OpenAI API. URL https://openai.com/blog/openai-api.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022a.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In NeurIPS, 2022b. URL http://papers.nips.cc/paper_files/paper/2022/ hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html. | 2309.10691#62 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 62 | [22] Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman. Mining of massive data sets. Cambridge university press, 2020. 7
[23] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how mod- els mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 1: Long Papers), pages 3214â3252, 2022. 10
[24] Zhuang Liu, Zhiqiu Xu, Joseph Jin, Zhiqiang Shen, and Trevor Darrell. Dropout reduces underfitting. In ICML, 2023. 15
[25] Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Dan S Weld. S2orc: The semantic scholar open research corpus. arXiv preprint arXiv:1911.02782, 2019. 14
[26] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 10 | 2309.10818#62 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 63 | Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. CoRR, abs/2107.03374. | 2309.10305#63 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 63 | Lastly, the ecological costs of these LLMs are still heavily debated [Bender et al. 2021; Bommasani et al. 2021; Dodge et al. 2022; Patterson et al. 2022, 2021; Wu et al. 2022] but represent an important aspect in which these models should continue to be studied and scrutinised as appropriate in near future and beyond.
# 7 CONCLUDING REMARKS
Evaluating information retrieval typically relies on relevance labels, and we have several options for collecting these. Figure 5 illustrates the options discussed in this paper, with the cost and accuracy we see at Bing. As experimenters, our goal is to move up and left, to greater accuracy and lower cost. Traditionally the goal has been to improve crowd labels, that is to move the bottom-left point higher up, and this has involved (i) collecting insight from real users (or
3 https://www.theverge.com/2019/5/28/18642978/music-streaming-spotify-song-length-distribution-production-switched-on-pop-vergecast-interview | 2309.10621#63 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 63 | Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, and Preslav Nakov. Fact-checking complex claims with program-guided reasoning. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 6981–7004. Association for Computational Linguistics, 2023. doi: 10.18653/v1/2023.acl-long.386. URL https://doi.org/10.18653/v1/2023.acl-long.386.
Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Túlio Ribeiro. ART: automatic multi-step reasoning and tool-use for large language models. CoRR, abs/2303.09014, 2023. doi: 10.48550/arXiv.2303.09014. URL https://doi.org/10.48550/arXiv.2303.09014. | 2309.10691#63 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 63 | [27] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. 5, 7, 8
[28] Ofir Press, Noah A Smith, and Mike Lewis. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409, 2021. 1, 9
[29] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. 18
Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. 18 | 2309.10818#63 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 64 | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023).
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Claude. 2023. Conversation with Claude AI assistant.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. | 2309.10305#64 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 64 | from experimenters themselves), (ii) turning these into guidelines, (iii) using trusted workers to read these guidelines and generate 'silver' labels, and (iv) giving the same guidelines to crowd workers. The crowd workers are monitored against the silver labels, and improvements largely come from improving the guidelines.
Our approach is different: we collect high-quality gold labels from searchers themselves (searchers in situ at Bing, topic developers in TREC) and use these labels to evaluate and select prompts for a large language model. The labels we get from our model are high quality, and in practice are more useful than those from even trained assessors. They are of course cheaper to acquire, and easier to collect for new languages or other new context; but they are also more accurate than third-party labels at predicting the preference of real searchers. This has had a tangible effect on our operations: retraining parts of our ranker using labels from this model, while keeping all else constant, resulted in about six months' relevance improvement in a single step. | 2309.10621#64 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
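The chunk above states that the model's labels predict real searcher preferences better than third-party labels do. One hedged way to quantify such a claim on your own data (all names and numbers below are illustrative, not Bing's or TREC's) is to score each label source against the same first-party gold set:

def accuracy_vs_gold(pred: dict, gold: dict) -> float:
    """Share of gold-labelled (query, doc) pairs where the predicted label matches."""
    keys = gold.keys() & pred.keys()
    return sum(pred[k] == gold[k] for k in keys) / len(keys)

# Illustrative labels keyed by (query, doc_id); values are graded relevance 0-3.
gold  = {("q1", "d1"): 3, ("q1", "d2"): 0, ("q2", "d3"): 2, ("q2", "d4"): 1}
crowd = {("q1", "d1"): 2, ("q1", "d2"): 0, ("q2", "d3"): 2, ("q2", "d4"): 3}
llm   = {("q1", "d1"): 3, ("q1", "d2"): 0, ("q2", "d3"): 2, ("q2", "d4"): 2}

print("crowd vs gold:", accuracy_vs_gold(crowd, gold))  # 0.5
print("llm   vs gold:", accuracy_vs_gold(llm, gold))    # 0.75

Whichever label source agrees more often with the first-party gold data is the one to trust for bulk labelling and ranker training.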
2309.10691 | 64 | Aaron Parisi, Yao Zhao, and Noah Fiedel. TALM: tool augmented language models. CoRR, abs/2205.12255, 2022. doi: 10.48550/arXiv.2205.12255. URL https://doi.org/10.48550/arXiv.2205.12255.
Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. Gorilla: Large language model connected with massive apis. CoRR, abs/2305.15334, 2023. doi: 10.48550/arXiv.2305.15334. URL https://doi.org/10.48550/arXiv.2305.15334.
Jiaxin Pei, Aparna Ananthasubramaniam, Xingyao Wang, Naitian Zhou, Apostolos Dedeloudis, Jackson Sargent, and David Jurgens. Potato: The portable text annotation tool. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2022. | 2309.10691#64 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 64 | Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. 18
[31] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021. 7
[32] Noam Shazeer. Glu variants improve transformer. arXiv preprint arXiv:2002.05202, 2020. 1, 9
[33] Daria Soboleva, Faisal Al-Khateeb, Robert Myers, Jacob R Steeves, Joel Hestness, and Nolan Dey. SlimPajama: A 627B token cleaned and deduplicated version of RedPajama. https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama, 2023. 1, 17 | 2309.10818#64 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 65 | Yiming Cui, Ziqing Yang, and Xin Yao. 2023. Efficient and effective text encoding for chinese llama and alpaca. arXiv preprint arXiv:2304.08177.
Tri Dao. 2023. FlashAttention-2: Faster attention with better parallelism and work partitioning.
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems.
Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In International conference on machine learning, pages 933–941. PMLR.
William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1):5232–5270. | 2309.10305#65 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 65 | Of the options described by Faggioli et al. [2023], our labelling is closest to “human verification: LLMs are considered crowdworkers, ... controlled by a human”, although we do not deliberately vary the LLM’s characteristics. We do retain human oversight and audit examples of LLM output, although we do not audit every label. Quality control, and indeed measuring LLM quality in general, is (as anticipated by Faggioli et al.) difficult as in most cases our LLM is “beyond human” quality and we can no longer rely on third-party assessors. Our gold collection, with queries and labels from real searches and real searchers, helps a great deal but of course searchers can still be swayed by distracting captions or unreliable results. (We review every query and URL in the corpus, but this only adds another human to the loop.) Contra Clarke et al., we do not see machine-made assessments degrading quality at all; nor do we consider them “very expensive”, at least compared to trained annotators. | 2309.10621#65 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
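The chunk above (2309.10621#65) mentions retaining human oversight by auditing examples of LLM output rather than every label. A small illustrative sketch of such an audit follows; `human_review` is a hypothetical callback for the human-in-the-loop step, and Cohen's kappa is one common chance-corrected agreement statistic, not a detail taken from the paper.

```python
import random
from collections import Counter

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Chance-corrected agreement between two equal-length label lists."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / (n * n)
    return 1.0 if expected == 1.0 else (observed - expected) / (1.0 - expected)

def audit_llm_labels(llm_labels: dict[str, int], human_review, sample_size: int = 200):
    """Sample labelled items, collect human audit labels, and report agreement."""
    keys = random.sample(sorted(llm_labels), k=min(sample_size, len(llm_labels)))
    machine = [llm_labels[k] for k in keys]
    human = [human_review(k) for k in keys]   # human auditors label the sampled items
    raw = sum(m == h for m, h in zip(machine, human)) / len(keys)
    return {"raw_agreement": raw, "kappa": cohens_kappa(machine, human)}
```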
2309.10818 | 65 | [34] Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. Advances in Neural Information Processing Systems, 35:19523–19536, 2022. 17, 18
[35] Philippe Tillet, Hsiang-Tsung Kung, and David Cox. Triton: an intermediate language and compiler for tiled neural network computations. In Proceedings of the 3rd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, pages 10–19, 2019. 14
[36] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. 3, 7, 15, 17
[37] https://github.com/mosaicml/llm-foundry. Llm foundry. Mosaicml, 2023. 15 | 2309.10818#65 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 66 | Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation.
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc’Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2021. The flores-101 evaluation benchmark for low-resource and multilingual machine translation.
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc’Aurelio Ranzato. 2019. Two new evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english. | 2309.10305#66 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 66 | In some ways, this is an easy case: the language model was trained on web text and we are labelling web text. The notion of judging web pages is likely already encoded, although we do not have clear evidence for this. Further, the topics can be addressed in the corpus: they do not need any personal, corporate, or otherwise restricted data, nor any particular domain-specific knowledge not already found in the text. Using LLMs for labelling suggests new and more difficult applications, for example labelling private corpora where we cannot give human assessors access. From the experiments above, we cannot verify this will be effective, and this remains for future work. We have also measured our labels in part with test setsâboth TREC, and Bingâs corpusâwhich have clear task descriptions. If we were to sample a query load from a running system, we would not have these descriptions and our labels would be less accurate. We also have a capable model: Liang et al. [2022] saw large differences from model to model over a range of tasks, although given our observations in Section 4 this could also be due to model:prompt interactions. As new models emerge, their performance will of course need to be tested. | 2309.10621#66 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 66 | Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. Tool learning with foundation models. In arxiv, 2023a.
Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023b. | 2309.10691#66 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 66 | 21
[37] https://github.com/mosaicml/llm-foundry. Llm foundry. Mosaicml, 2023. 15
[38] Introducing mpt-7b: A new standard for open-source, commercially usable llms. Mosaicml blog, 2023. 3, 14, 15
[39] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V Le, Tengyu Ma, and Adams Wei Yu. Doremi: Optimizing data mixtures speeds up language model pretraining. arXiv preprint arXiv:2305.10429, 2023. 18
[40] Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy Liang. Data selection for language models via importance resampling. arXiv preprint arXiv:2302.03169, 2023. 17
[41] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. 10 | 2309.10818#66 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 67 | Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021a. Measuring massive multitask language understanding. In ICLR. OpenReview.net.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874.
Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, et al. 2020. Scaling laws for autoregressive generative modeling. arXiv preprint arXiv:2010.14701. | 2309.10305#67 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 67 | As our models improve, we are also faced with increasing difficulty measuring our labels as our measures start to saturate [Faggioli et al. 2023]. We have found it necessary to build “harder” gold sets over time, encoding finer distinctions to better distinguish labellers and prompts. There is no equivalent mechanism in TREC or other open data sets, and this may become pressing if and when LLM-based labelling becomes commonplace.
It is certainly possible to use large language models to label documents for relevance and therefore to evaluate search systems; it is possible to get performance comparable to TREC judges and notably better than crowd judges. There are many choices that make a difference, meaning we need metrics-for-metrics to distinguish a good from a bad system, as well as ongoing audits and human verification. True “gold” judgements (e.g. from TREC assessors or our ground-truth set) make it possible to experiment with prompt and metric design. We have found the approach productive at Bing, and have used it for greater speed, reduced cost, and substantial improvements in our running system.
# ACKNOWLEDGMENTS | 2309.10621#67 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
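One concrete reading of the "metrics-for-metrics" idea in the chunk above (2309.10621#67) is to check whether two label sources rank retrieval runs the same way. The sketch below uses a hand-rolled Kendall's tau over run orderings; the run names and scores are invented for illustration and are not data from the paper.

```python
from itertools import combinations

def kendall_tau(order_a: list[str], order_b: list[str]) -> float:
    """Rank correlation between two orderings of the same set of runs."""
    pos_b = {run: i for i, run in enumerate(order_b)}
    concordant = discordant = 0
    for (i, x), (j, y) in combinations(list(enumerate(order_a)), 2):
        # x precedes y in order_a; check whether order_b agrees.
        if pos_b[x] < pos_b[y]:
            concordant += 1
        else:
            discordant += 1
    pairs = concordant + discordant
    return (concordant - discordant) / pairs if pairs else 1.0

# Illustrative per-run effectiveness under two label sources (not real data).
scores_llm = {"runA": 0.62, "runB": 0.55, "runC": 0.48}
scores_crowd = {"runA": 0.58, "runB": 0.59, "runC": 0.41}
order_llm = sorted(scores_llm, key=scores_llm.get, reverse=True)
order_crowd = sorted(scores_crowd, key=scores_crowd.get, reverse=True)
print(f"Kendall tau between orderings: {kendall_tau(order_llm, order_crowd):.2f}")
```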
2309.10691 | 67 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023c.
Machel Reid and Graham Neubig. Learning to model editing processes. In Findings of the Association for Computational Linguistics. Association for Computational Linguistics, 2022.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.
William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Self-critiquing models for assisting human evaluators. CoRR, abs/2206.05802, 2022. doi: 10.48550/arXiv.2206.05802. URL https://doi.org/10.48550/arXiv.2206.05802. | 2309.10691#67 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10305 | 68 | Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556.
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.
Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Chi Zhang, Ce Bian, Chi Zhang, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. 2023. Beavertails: Towards improved safety alignment of llm via a human-preference dataset.
Youhe Jiang, Fangcheng Fu, Xupeng Miao, Xiaonan Nie, and Bin Cui. 2023a. Osdp: Optimal sharded data parallel for distributed deep learning. arXiv preprint arXiv:2209.13258. | 2309.10305#68 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 68 | Large language models can accurately predict searcher preferences
# ACKNOWLEDGMENTS
We thank David Soukal and Stifler Sun for their effort developing and testing many iterations of Bing’s LLM labelling system. Ian Soboroff kindly provided TREC-Robust judging guidelines. Dave Hedengren, Andy Oakley, and colleagues at Bing provided useful comments on the manuscript.
# REFERENCES
Aashish Agarwal, Ankita Mandal, Matthias Schaffeld, Fangzheng Ji, Jhiao Zhan, Yiqi Sun, and Ahmet Aker. 2019. Good, neutral or bad news classification. In Proceedings of the Third International Workshop on Recent Trends in News Information Retrieval. 9–14.
Meysam Alizadeh, Maël Kubli, Zeynab Samei, Shirin Dehghani, Juan Diego Bermeo, Maria Korobeynikovo, and Fabrizio Gilardi. 2023. Open-source large language models outperform crowd workers and approach ChatGPT in text-annotation tasks. arXiv:2307.02179 [cs.CL] | 2309.10621#68 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 68 | Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761, 2023a. doi: 10.48550/arXiv.2302.04761. URL https://doi.org/10.48550/arXiv.2302.04761.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools, 2023b.
Timo Schick, Jane A. Yu, Zhengbao Jiang, Fabio Petroni, Patrick S. H. Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, and Sebastian Riedel. PEER: A collaborative language model. In The Eleventh International Conference on Learning Representations,
| 2309.10691#68 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 68 | Dataset      Commoncrawl       C4                GitHub         Books          ArXiv          Wikipedia      StackExchange   Total
Redpajama    72.6% (878B)      14.4% (175B)      4.9% (59B)     2.1% (26B)     2.3% (28B)     2.0% (24B)     1.7% (20B)      100.0% (1.2T)
Slimpajama   52.2% (333B)      26.7% (170B)      5.2% (33B)     4.2% (27B)     4.6% (29B)     3.8% (24B)     3.3% (21B)      100.0% (637B)
LLaMA 1      67.0% (670/938B)  15.0% (150/210B)  4.5% (45/63B)  4.5% (45/63B)  2.5% (25/35B)  4.5% (45/63B)  2.0% (20/28B)   100% (1.0/1.4T)

Dataset      Commoncrawl    C4      GitHub   Books    Wikipedia   WebText2   MassiveWeb   News     Total
GPT3         60.0% (180B)   0.0%    0.0%     16.0%    3.0%        22.0%      0.0%         0.0%     100.0% (300B)
MassiveText  0.0%           10.0%   3.0%     27.0%    2.0%        0.0%       48.0%        10.0%    100.0% (600B) | 2309.10818#68 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
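The table in the chunk above (2309.10818#68) lists per-source proportions for several pretraining mixtures. The sketch below shows how such proportions translate into per-source token budgets for a fixed training budget; the 330B total is illustrative, not a number taken from the paper.

```python
# SlimPajama source proportions taken from the table above.
slimpajama_mix = {
    "Commoncrawl": 0.522, "C4": 0.267, "GitHub": 0.052, "Books": 0.042,
    "ArXiv": 0.046, "Wikipedia": 0.038, "StackExchange": 0.033,
}
assert abs(sum(slimpajama_mix.values()) - 1.0) < 0.01  # proportions should sum to ~100%

def token_budget(mix: dict[str, float], total_tokens: float) -> dict[str, float]:
    """Allocate a total token budget across sources according to the mixture weights."""
    return {source: weight * total_tokens for source, weight in mix.items()}

for source, tokens in token_budget(slimpajama_mix, total_tokens=330e9).items():
    print(f"{source:14s} {tokens / 1e9:6.1f}B tokens")
```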
2309.10305 | 69 | Zixuan Jiang, Jiaqi Gu, and David Z Pan. 2023b. Normsoftmax: Normalizing the input of softmax to accelerate and stabilize training. In 2023 IEEE International Conference on Omni-layer Intelligent Systems (COINS), pages 1–6. IEEE.
Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2021. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2023. Cmmlu: Measuring massive multitask language understanding in chinese. | 2309.10305#69 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 69 | Peter Bailey, Nick Craswell, Ian Soboroff, Paul Thomas, Arjen P. de Vries, and Emine Yilmaz. 2008. Relevance Assessment: Are Judges Exchangeable and Does It Matter. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 667–674.
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency.
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. 5454–5476.
Valeria Bolotova, Vladislav Blinov, Yukun Zheng, W Bruce Croft, Falk Scholer, and Mark Sanderson. 2020. Do people and neural nets pay attention to the same words: studying eye-tracking data for non-factoid QA evaluation. In Proceedings of the ACM International Conference on Information and Knowledge Management. 85–94. | 2309.10621#69 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 69 | ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023c. URL https://openreview.net/pdf?id=KbYevcLjnc.
ShareGPT data, 2023. URL https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered.
Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768, 2020. | 2309.10691#69 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10305 | 70 | Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
MosaicML. 2023. Introducing mpt-7b: A new standard for open-source, commercially usable llms.
Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, et al. 2021. Efficient large-scale language model training on gpu clusters using megatron-lm. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–15.
Xiaonan Nie, Xupeng Miao, Zhi Yang, and Bin Cui. 2022. Tsplit: Fine-grained gpu memory management for efficient dnn training via tensor splitting. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 2615–2628. IEEE. | 2309.10305#70 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 70 | Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in neural information processing systems 29 (2016).
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut,
Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv:2108.07258 [cs.LG]
Andrei Broder. 2002. A taxonomy of web search. In ACM SIGIR Forum, Vol. 36. ACM New York, NY, USA, 3–10.
Jake Brutlag. 2009. Speed matters for Google web search. Online: https://services.google.com/fh/files/blogs/google_delayexp.pdf. Downloaded 2023-09-14.
Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 (2017), 183–186. | 2309.10621#70 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 70 | Lei Shu, Liangchen Luo, Jayakumar Hoskere, Yun Zhu, Canoee Liu, Simon Tong, Jindong Chen, and Lei Meng. Rewritelm: An instruction-tuned large language model for text rewriting. CoRR, abs/2305.15685, 2023. doi: 10.48550/arXiv.2305.15685. URL https://doi.org/10.48550/arXiv.2305.15685.
Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, William Ngan, Spencer Poff, Naman Goyal, Arthur Szlam, Y-Lan Boureau, Melanie Kambadur, and Jason Weston. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. CoRR, abs/2208.03188, 2022. doi: 10.48550/arXiv.2208.03188. URL https://doi.org/10.48550/arXiv.2208.03188. | 2309.10691#70 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10305 | 71 | NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation.
OpenAI. 2022. Introducing chatgpt. Blog post openai.com/blog/chatgpt.
OpenAI. 2023. Gpt-4 technical report. ArXiv, abs/2303.08774.
OpenCompass. 2023. Opencompass: A universal evaluation platform for foundation models. https://github.com/InternLM/OpenCompass. | 2309.10305#71 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 71 | 356, 6334 (2017), 183–186.
Ben Carterette, Paul N Bennett, David Maxwell Chickering, and Susan T Dumais. 2008. Here or there: Preference judgments for relevance. In Proceedings of the European Conference on Information Retrieval. 16–27.
Carlos Castillo, Debora Donato, Luca Becchetti, Paolo Boldi, Stefano Leonardi, Massimo Santini, and Sebastiano Vigna. 2006. A reference collection for web spam. SIGIR Forum 40, 2 (Dec. 2006), 11–24.
K. Alec Chrystal and Paul D. Mizen. 2001. Goodhart's law: Its origins, meaning and implications for monetary policy. Prepared for the Festschrift in honour of Charles Goodhart. | 2309.10621#71 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 71 | Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li, Ke Wang, Ye Tian, and Sujian Li. Restgpt: Connecting large language models with real-world applications via restful apis. CoRR, abs/2306.06624, 2023. doi: 10.48550/arXiv.2306.06624. URL https://doi.org/10.48550/arXiv.2306.06624.
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, and Le Sun. Toolalpaca: Generalized tool learning for language models with 3000 simulated cases. CoRR, abs/2306.05301, 2023. doi: 10.48550/arXiv.2306.05301. URL https://doi.org/10.48550/arXiv.2306.05301. | 2309.10691#71 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 71 | GPT-3 Llama2 175B 7B SlimPajama-DC 1.3B DC-1 DC-2 DC-3 DC-4 DC-5 DC-6 Abstract Algebra Anatomy Astronomy Business Ethics Clinical Knowledge College Biology College Chemistry College Computer Science College Mathematics College Medicine College Physics Computer Security Conceptual Physics Econometrics Electrical Engineering Elementary Mathematics Formal Logic Global Facts High School Biology High School Chemistry High School Computer Science Humanities High School European History High School Geography Social Science High School Government And Politics Social Science High School Macroeconomics Social Science High School Mathematics High School Microeconomics High School Physics High School Psychology High School Statistics High School Us History High School World History Human Aging Human Sexuality International Law Jurisprudence Logical Fallacies Machine Learning Management Marketing Medical Genetics Miscellaneous Moral Disputes Moral Scenarios Nutrition Philosophy Prehistory Professional Accounting Professional Law Professional Medicine Professional Psychology Public Relations Security Studies Sociology Us Foreign Policy Virology World Religions STEM 30.0 STEM 48.0 STEM 49.0 46.0 Other 48.0 Other STEM 45.0 STEM 26.0 STEM 46.0 STEM 34.5 48.0 Other STEM 28.0 STEM 57.0 STEM 36.5 33.0 STEM | 2309.10818#71 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 72 | OpenCompass. 2023. Opencompass: A universal evaluation platform for foundation models. https://github.com/InternLM/OpenCompass.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2022. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning,
volume 174 of Proceedings of Machine Learning Research, pages 248–260. PMLR.
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116. | 2309.10305#72 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 72 | Charles L A Clarke, Gianluca Demartini, Laura Dietz, Guglielmo Faggioli, Matthias Hagen, Claudia Hauff, Noriko Kando, Evangelos Kanoulas, Martin Potthast, Ian Soboroff, Benno Stein, and Henning Wachsmuth. 2023. HMC: A spectrum of human–machine-collaborative relevance judgment frameworks. In Frontiers of Information Access Experimentation for Research and Education, Christine Bauer, Ben Carterette, Nicola Ferro, and Norbert Fuhr (Eds.). Vol. 13. Leibniz-Zentrum für Informatik. Issue 1.
Paul Clough, Mark Sanderson, Jiayu Tang, Tim Gollins, and Amy Warner. 2013. Examining the limits of crowdsourcing for relevance assessment. IEEE Internet Computing 17, 4 (2013).
Gordon V Cormack, Christopher R Palmer, and Charles L A Clarke. 1998. Efficient construction of large test collections. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 282–289. | 2309.10621#72 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 72 | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
US Bureau of Labor Statistics. Table b-3. average hourly and weekly earnings of all employees on private nonfarm payrolls by industry sector, seasonally adjusted, 2023. URL https://www.bls.gov/news.release/empsit.t19.htm. Accessed: 2023-9-3.
Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, and Heng Ji. Leti: Learning to generate from textual interactions. arXiv preprint arXiv:2305.10314, 2023a.
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. Large language models are cognitive synergists: Task solving through multi-persona self-collaboration. In arxiv, 2023b. | 2309.10691#72 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 72 | 48.0 Other STEM 45.0 STEM 26.0 STEM 46.0 STEM 34.5 48.0 Other STEM 28.0 STEM 57.0 STEM 36.5 33.0 STEM 50.0 STEM 30.0 29.0 Humanities 37.0 Other STEM 48.0 STEM 33.0 STEM 39.0 54.0 58.0 58.0 40.5 STEM 28.0 42.0 STEM 28.0 61.0 STEM 30.5 53.0 56.0 50.0 54.0 55.5 55.0 48.0 STEM 31.0 56.0 Other 60.0 Other 40.0 Other 60.0 Other 44.5 Humanities 26.0 Humanities 47.0 Other 51.0 Humanities 53.0 Humanities 33.0 Other 34.5 Humanities 36.0 Other 44.5 Social Science 48.0 Social Science 52.0 Social Science 53.0 Social Science 69.0 Social Science 46.0 Other 55.0 Humanities Social Science Social Science Social Science Humanities Humanities Other Social Science Humanities Humanities Humanities 29.0 37.0 33.6 40.0 35.1 37.5 32.0 29.0 33.0 30.6 26.5 45.0 36.6 23.7 26.9 24.3 27.0 29.0 34.5 28.1 31.0 44.2 34.3 | 2309.10818#72 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 73 | Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. corr abs/1802.05365 (2018). arXiv preprint arXiv:1802.05365.
Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409.
Markus N Rabe and Charles Staats. 2021. Self-attention does not need O(n2) memory. arXiv preprint arXiv:2112.05682.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290. | 2309.10305#73 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 73 | Tadele T. Damessie, Taho P. Nghiem, Falk Scholer, and J. Shane Culpepper. 2017. Gauging the quality of relevance assessments using inter-rater agreement. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval.
Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A Smith, Nicole DeCario, and Will Buchanan. 2022. Measuring the carbon intensity of AI in cloud instances. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency. 1877–1894.
Susan Dumais, Robin Jeffries, Daniel M. Russell, Diane Tang, and Jaime Teevan. 2014. Understanding user behavior through log data and analysis. In Ways of knowing in HCI, Judith S. Olson and Wendy A. Kellogg (Eds.). Springer, New York, 349–372. | 2309.10621#73 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 73 | Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=gEZrGCozdqR.
Yiheng Xu, Hongjin Su, Chen Xing, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, and Tao Yu. Lemur: Harmonizing natural language and code for language agents, 2023.
John Yang, Akshara Prabhakar, Karthik Narasimhan, and Shunyu Yao. Intercode: Standardizing and benchmarking interactive coding with execution feedback. arXiv preprint arXiv:2306.14898, 2023a. | 2309.10691#73 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 73 | 26.5 45.0 36.6 23.7 26.9 24.3 27.0 29.0 34.5 28.1 31.0 44.2 34.3 44.6 35.4 24.8 31.9 26.5 47.3 35.2 39.7 40.9 40.8 36.6 51.2 38.9 39.3 23.2 35.0 46.6 43.0 42.4 40.2 24.3 37.6 39.9 36.1 25.9 30.2 44.5 35.1 40.9 31.8 46.8 46.0 30.1 50.9 27.0 23.0 25.0 24.0 30.2 23.6 26.0 37.0 35.0 26.0 24.5 24.0 27.7 24.6 29.0 26.2 35.7 30.0 25.8 27.6 29.0 23.6 34.3 35.2 34.4 26.7 23.5 27.8 32.3 21.3 24.5 29.1 14.8 28.2 26.5 26.9 19.6 17.9 26.2 22.2 27.0 22.5 29.5 27.3 28.1 28.0 26.5 27.0 27.1 19.9 26.3 33.6 | 2309.10818#73 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
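The SlimPajama-DC summary in the row above contrasts global deduplication (across all sources) with local deduplication (within a single source). The toy sketch below illustrates only that scoping difference using exact content hashes; the actual SlimPajama pipeline uses fuzzy (MinHash-style) matching, so treat this as an assumption-laden illustration, not the real implementation.

```python
import hashlib


def _key(doc: str) -> str:
    # Exact-match fingerprint of a normalized document.
    return hashlib.sha256(doc.strip().lower().encode()).hexdigest()


def dedup_local(sources: dict[str, list[str]]) -> dict[str, list[str]]:
    """Remove duplicates only within each source (e.g., within CommonCrawl)."""
    out = {}
    for name, docs in sources.items():
        seen, kept = set(), []
        for doc in docs:
            k = _key(doc)
            if k not in seen:
                seen.add(k)
                kept.append(doc)
        out[name] = kept
    return out


def dedup_global(sources: dict[str, list[str]]) -> dict[str, list[str]]:
    """Remove duplicates across all sources combined."""
    seen: set[str] = set()
    out: dict[str, list[str]] = {}
    for name, docs in sources.items():
        out[name] = []
        for doc in docs:
            k = _key(doc)
            if k not in seen:
                seen.add(k)
                out[name].append(doc)
    return out


corpus = {"web": ["a cat sat", "a cat sat", "hello world"],
          "wiki": ["hello world", "graph theory"]}
print(dedup_local(corpus))   # keeps "hello world" in both sources
print(dedup_global(corpus))  # keeps only the first "hello world" seen
```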
2309.10621 | 74 | Guglielmo Faggioli, Laura Dietz, Charles Clarke, Gianluca Demartini, Matthias Hagen, Claudia Hauff, Noriko Kando, Evangelos Kanoulas, Martin Potthast, Benno Stein, and Henning Wachsmuth. 2023. Perspectives on large language models for relevance judgment. arXiv:2304.09161 [cs.IR]
Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. 2023. ChatGPT outperforms crowd-workers for text-annotation tasks. arXiv:2303.15056 [cs.CL] Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
609–614. | 2309.10621#74 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
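The abstract in the row above describes developing an LLM prompt that reproduces first-party relevance preferences. Below is a hedged sketch of what a query/passage relevance-labelling prompt could look like; the wording, the 0-3 scale framing, and the `call_llm` placeholder are illustrative assumptions, not the prompt deployed at Bing or the paper's exact template.

```python
PROMPT_TEMPLATE = """You are a search quality rater.
Given a query and a result, rate how well the result answers the query
on a scale of 0 (irrelevant) to 3 (perfectly relevant).

Query: {query}
Result: {passage}

Answer with a single digit."""


def call_llm(prompt: str) -> str:
    # Placeholder: a production labeller would call a hosted LLM here.
    return "2"


def label(query: str, passage: str) -> int:
    """Format the prompt, query the model, and parse the first digit as the label."""
    reply = call_llm(PROMPT_TEMPLATE.format(query=query, passage=passage))
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) if digits else 0


print(label("how tall is mount everest",
            "Mount Everest rises 8,849 metres above sea level."))
```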
2309.10691 | 74 |
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, and Anima Anandkumar. Leandojo: Theorem proving with retrieval-augmented language models. CoRR, abs/2306.15626, 2023b. doi: 10.48550/arXiv.2306.15626. URL https://doi.org/10.48550/arXiv.2306.15626.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600, 2018.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2022. | 2309.10691#74 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 74 | [flattened numeric table values from the paper; row and column structure not recoverable from extraction] | 2309.10818#74 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 75 | Teven Le Scao, Angela Fan, Christopher Akiki, Elizabeth-Jane Pavlick, Suzana Iliâc, Daniel Hesslow, Roman Castagnâe, Alexandra Sasha Luccioni, Franccois Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Rose Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurenccon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa Etxabe, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris C. Emezue, Christopher Klamm, Colin Leong, Daniel Alexander van Strien, David | 2309.10305#75 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 75 | 609–614.
Charles A E Goodhart. 1975. Problems of monetary management: The UK experience. In Papers in Monetary Economics. Vol. 1. Reserve Bank of Australia. Google LLC. 2022. General Guidelines. https://guidelines.raterhub.com/searchqualityevaluatorguidelines.pdf, Downloaded 29 July 2023.. William Hersh, Chris Buckley, TJ Leone, and David Hickam. 1994. OHSUMED: An interactive retrieval evaluation and new large test collection for
research. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 192–201.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the Knowledge in a Neural Network. In NIPS Deep Learning and Representation Learning Workshop. http://arxiv.org/abs/1503.02531
Sebastian Hofstätter, Hamed Zamani, Bhaskar Mitra, Nick Craswell, and Allan Hanbury. 2020. Local self-attention over long text for efficient document retrieval. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 2021–2024. | 2309.10621#75 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 75 | Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, and Graham Neubig. Webarena: A realistic web environ- ment for building autonomous agents. CoRR, abs/2307.13854, 2023. doi: 10.48550/arXiv.2307. 13854. URL https://doi.org/10.48550/arXiv.2307.13854.
# A LIMITATIONS AND FUTURE WORK | 2309.10691#75 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 75 | [flattened numeric table values from the paper; row and column structure not recoverable from extraction] | 2309.10818#75 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10621 | 76 | Keith Hoskin. 1996. The 'awful' idea of accountability: Inscribing people into the measurement of objects. In Accountability: Power, ethos and technologies of managing, R Munro and J Mouritsen (Eds.). International Thompson Business Press, London.
Oana Inel, Tim Draws, and Lora Aroyo. 2023. Collect, measure, repeat: Reliability factors for responsible AI data collection. arXiv:2308.12885 [cs.LG] Andrej Karpathy. 2023. State of GPT. Seminar at Microsoft Build. https://build.microsoft.com/en-US/sessions/db3f4859-cd30-4445-a0cd-553c3304f8e2. Gabriella Kazai, Bhaskar Mitra, Anlei Dong, Nick Craswell, and Linjun Yang. 2022. Less is Less: When are Snippets Insufficient for Human vs Machine
Relevance Estimation?. In Proceedings of the European Conference on Information Retrieval. 153–162.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large Language Models are Zero-Shot Reasoners. arXiv:2205.11916 [cs.CL] | 2309.10621#76 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 76 |
# A LIMITATIONS AND FUTURE WORK
We simulate the natural language feedback of human users with GPT-4. Despite showing in a human experiment that it is similar to human-written feedback, however, GPT-4 simulated might not cover all the possible responses from real-human users and may not suitably simulate every aspect of human feedback, particularly in tasks (e.g., policy-making) that involve nuanced judgments of human values. While the focus of our work lies on LLMâs in-context multi-turn interaction, we have yet to explore the potential of directly leveraging language feedback for model training and improvement similar to Wang et al. (2023a), which we leave for future work. Furthermore, our metrics may not fully assess the quality of the interaction process beyond outcomes. For example, models repetitively guessing to get higher scores should be penalized. Despite our best efforts to ensure our benchmark contains challenging and comprehensive tasks, there is still a wide range of tools (Qin et al., 2023c) and real-world use cases (e.g., web-browsing Deng et al. (2023b), operating system Liu et al. (2023d)) that MINT did not cover. Instead of making this benchmark a one-time effort, we hope to continuously improve this benchmark by integrating more challenging tasks and tools as LLMs get better. | 2309.10691#76 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 76 | [flattened numeric table values from the paper; row and column structure not recoverable from extraction] | 2309.10818#76 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 77 | Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jorg Frohberg, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro von Werra, Leon Weber, Long Phan, Loubna Ben Allal, Ludovic Tanguy, Manan Dey, Manuel Romero Muñoz, Maraim Masoud, Marâia Grandury, Mario vSavsko, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian- Jian Jiang, Minh Chien Vu, Mohammad Ali Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla A. Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Lâopez, Rui Ribeiro, Salomey Osei, Sampo | 2309.10305#77 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 77 | Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2022. Holistic evaluation of language models. arXiv:2211.09110 [cs.CL] | 2309.10621#77 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 77 | # B DETAILS OF HUMAN EVALUATION
We perform two stages of human annotation using the Potato annotation interface (Pei et al., 2022). In the first stage, we ask two human annotators (A and B) to provide language feedback for a trajectory. We randomly sample 2 instances of interaction trajectories per task from a subset of 8 evaluated LLMs to maximize diversity (in Tab. 3). We filter out task instances that succeed in the first turn (i.e., no need for feedback), resulting in 113 interaction trajectories for annotation. We randomly select a turn for each task trajectory and remove all interactions and GPT-4 generated feedback after that turn. We randomly divide the 113 instances into two subsets and assign each subset to one human annotator. Given previous interaction history, human annotators A and B are asked to provide a turn of natural language feedback as if interacting with ChatGPT. Annotation of each feedback, on average, takes 96 seconds. According to US Bureau of Labor Statistics (2023), U.S. private non-farm worker average about $33.82 hourly wage (Aug 2023), which translate to an annotation cost of $90 per 100 turns of feedback. | 2309.10691#77 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
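The human-evaluation chunk in the row above reports an average of 96 seconds per feedback turn and cites a $33.82 average hourly wage to arrive at roughly $90 per 100 turns. The short calculation below simply reproduces that arithmetic as a check; the variable names are mine, and the figures come straight from the text above.

```python
seconds_per_turn = 96    # average annotation time per feedback turn, as reported above
hourly_wage = 33.82      # cited U.S. average private non-farm hourly wage (USD, Aug 2023)

hours_per_100_turns = 100 * seconds_per_turn / 3600
cost_per_100_turns = hours_per_100_turns * hourly_wage
print(f"{hours_per_100_turns:.2f} h -> ${cost_per_100_turns:.2f} per 100 turns")
# ~2.67 h -> ~$90.19 per 100 turns, matching the ~$90 figure quoted above
```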
2309.10305 | 78 | Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Lâopez, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, S. Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, | 2309.10305#78 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 78 | Tie-Yan Liu. 2009. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval 3, 3 (2009), 225–331. Safiya Umoja Noble. 2018. Algorithms of oppression. In Algorithms of oppression. New York University Press. OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL] David Patterson, Joseph Gonzalez, Urs Hölzle, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R So, Maud Texier, and Jeff Dean.
2022. The carbon footprint of machine learning training will plateau, then shrink. Computer 55, 7 (2022), 18–28.
David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. 2021. Carbon emissions and large neural network training. (2021). arXiv:2104.10350 [cs.LG]
Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. 2023. Automatic prompt optimization with "gradient descent" and beam search. arXiv:2305.03495 | 2309.10621#78 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 78 | In the second stage, we ask two different human annotators (C and D) to compare human-annotated feedback (from the first stage) and GPT-4 generated feedback (from the original trajectory) on two dimensions: helpfulness and human-like. Specifically, helpfulness means whether feedback is help- ful for the LLM to succeed in this task, while human-like focuses on the literal similarity of feedback and human usage. For each dimension, we ask them to determine which feedback is better (i.e., more helpful or human-like) or both are equally good.
C ABLATION STUDY
C.1 HOW DO FEEDBACK VARIATIONS IMPACT FEEDBACK QUALITY ∆FEEDBACK?
To gain deeper insights into the effects of various feedback settings on enhancing the performance of language models, we perform an ablation study on feedback by controlling feedback informativeness and frequency. See §F.4.2 for detailed implementation. We present the results in Tab. A.6.
C.1.1 INFORMATIVENESS
We define informativeness in two dimensions: (1) whether the generated feedback is conditioned on the ground-truth solution (w/ GT) or not (w/o GT, default setting); (2) whether the feedback provided to LLM is textual (default setting) or binary (i.e., good vs. bad). | 2309.10691#78 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
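The ablation chunk in the row above varies feedback along two axes: whether the feedback provider is conditioned on the ground-truth solution (w/ GT vs. w/o GT) and whether the feedback returned to the model is textual or only a binary good/bad signal. A small sketch of how those settings could be enumerated as configurations; the dataclass and field names are illustrative assumptions, not the paper's code.

```python
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class FeedbackConfig:
    with_ground_truth: bool   # is the feedback provider shown the reference solution?
    textual: bool             # textual feedback vs. a bare good/bad signal


# Enumerate the 2 x 2 ablation grid described above.
configs = [FeedbackConfig(gt, textual)
           for gt, textual in product([False, True], [True, False])]

for cfg in configs:
    kind = "textual" if cfg.textual else "binary"
    gt = "w/ GT" if cfg.with_ground_truth else "w/o GT"
    print(f"{gt}, {kind} feedback")
```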
2309.10305 | 79 | Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal V. Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben- David, Stephen H. Bach, Taewoon Kim, Tali Bers, Thibault Févry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiang Tang, Zheng Xin Yong, Zhiqing Sun, Shaked Brody, Y Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre Franccois Lavallâee, Rémi Lacroix, Samyam | 2309.10305#79 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |