| doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| string (10) | int64 (0–936) | string (401–2.02k) | string (12–14) | string (8–162) | string (228–1.92k) | string (31) | string (7–6.97k) | string (5–107) | string (4–398, ⌀) | string (8–194, ⌀) | string (5–17) | string (8) | string (8) | list |
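The rows below follow this schema. A minimal sketch of how such a dataset could be loaded and iterated with the Hugging Face `datasets` library is shown here; the hub id `your-org/arxiv-paper-chunks` is a placeholder assumption, not the actual repository name.

```python
from datasets import load_dataset

# Placeholder hub id -- substitute the real dataset repository.
ds = load_dataset("your-org/arxiv-paper-chunks", split="train")

# Each row holds one text chunk of an arXiv paper plus the paper's metadata.
for row in ds.select(range(3)):
    print(row["doi"], row["chunk-id"], row["title"])
    print(row["chunk"][:200], "...")
    # `references` is a list of {"id": <arXiv id>} entries.
    print([ref["id"] for ref in row["references"]][:5])
```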
2309.12284 | 68 | [46] OpenAI. GPT-3.5. Technical Report, 2022.
[47] OpenAI. GPT-3.5-Turbo. Technical Report, 2022.
[48] OpenAI. GPT-4. Technical Report, 2023.
[49] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training Language Models to Follow Instructions with Human Feedback. In Neural Information Processing Systems, 2022.
[50] W. Park, D. Kim, Y. Lu, and M. Cho. Relational Knowledge Distillation. In Computer Vision and Pattern Recognition, 2019. | 2309.12284#68 | MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models | Large language models (LLMs) have pushed the limits of natural language
understanding and exhibited excellent problem-solving ability. Despite the
great success, most existing open-source LLMs (e.g., LLaMA-2) are still far
from satisfactory at solving mathematical problems due to the complex
reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned
language model that specializes in mathematical reasoning. Specifically, we
start by bootstrapping mathematical questions by rewriting the question from
multiple perspectives without extra knowledge, which results in a new dataset
called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA.
Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for
mathematical reasoning demonstrate that MetaMath outperforms a suite of
open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4%
on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same
size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of
82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release the full
MetaMathQA dataset, the MetaMath models at different sizes, and the
training code for public use. | http://arxiv.org/pdf/2309.12284 | Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu | cs.CL, cs.AI | Technical Report, Work in Progress. Project Page:
https://meta-math.github.io/ | null | cs.CL | 20230921 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2308.09583"
},
{
"id": "2305.20050"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2211.09085"
},
{
"id": "2305.10403"
},
{
"id": "1812.00524"
},
{
"id": "2202.00132"
},
{
"id": "2309.12288"
},
{
"id": "2305.07759"
},
{
"id": "2309.04564"
},
{
"id": "2107.03374"
},
{
"id": "1811.10959"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2203.13474"
},
{
"id": "2308.01825"
},
{
"id": "2110.14168"
},
{
"id": "2308.07758"
},
{
"id": "2305.06161"
},
{
"id": "2309.05653"
},
{
"id": "2303.05398"
},
{
"id": "2210.06726"
},
{
"id": "2212.09561"
},
{
"id": "2211.12588"
},
{
"id": "1503.02531"
},
{
"id": "2210.11610"
},
{
"id": "1907.11692"
},
{
"id": "2306.08568"
},
{
"id": "2210.02414"
},
{
"id": "2305.14314"
},
{
"id": "2305.11206"
},
{
"id": "2309.02144"
},
{
"id": "2306.01694"
}
] |
2309.12284 | 69 | [50] W. Park, D. Kim, Y. Lu, and M. Cho. Relational Knowledge Distillation. In Computer Vision and Pattern Recognition, 2019.
[51] G. Penedo, Q. Malartic, D. Hesslow, R. Cojocaru, A. Cappelli, H. Alobeidli, B. Pannier, E. Almazrouei, and J. Launay. The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only. Preprint arXiv:2306.01116, 2023.
[52] Z. Qiu, W. Liu, T. Xiao, Z. Liu, U. Bhatt, Y. Luo, A. Weller, and B. Schölkopf. Iterative Teaching by Data Hallucination. In Artificial Intelligence and Statistics, 2023.
[53] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language Models are Unsupervised Multitask Learners. Technical Report, 2019. | 2309.12284#69 |
2309.12284 | 70 | [54] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. Liu. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 2020.
[55] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal Policy Optimization Algorithms. Preprint arXiv:1707.06347, 2017.
[56] P. Shen, X. Lu, S. Li, and H. Kawai. Feature Representation of Short Utterances Based on Knowledge Distillation for Spoken Language Identification. In International Speech Communication Association, 2018.
[57] K. Shridhar, A. Stolfo, and M. Sachan. Distilling Reasoning Capabilities into Smaller Language Models. In Findings of the Association for Computational Linguistics, 2023.
[58] A. Talmor, J. Herzig, N. Lourie, and J. Berant. CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge. In North American Chapter of the Association for Computational Linguistics, 2019.
| 2309.12284#70 |
2309.12284 | 71 |
[59] R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. Hashimoto. Stanford Alpaca: An Instruction-following LLaMA Model. Technical report, 2023.
[60] R. Taylor, M. Kardas, G. Cucurull, T. Scialom, A. Hartshorn, E. Saravia, A. Poulton, V. Kerkez, and R. Stojnic. Galactica: A Large Language Model for Science. Preprint arXiv:2211.09085, 2022.
[61] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. LLaMA: Open and Efficient Foundation Language Models. Preprint arXiv:2302.13971, 2023. | 2309.12284#71 |
2309.12284 | 72 | [62] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. Ferrer, M. Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. Koura, M. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E. Smith, | 2309.12284#72 |
2309.12284 | 74 | [63] B. Wang and A. Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. Technical Report, 2021.
[64] P. Wang, L. Li, L. Chen, F. Song, B. Lin, Y. Cao, T. Liu, and Z. Sui. Making Large Language Models Better Reasoners with Alignment. Preprint arXiv:2309.02144, 2023.
[65] T. Wang, J. Zhu, A. Torralba, and A. Efros. Dataset Distillation. Preprint arXiv:1811.10959, 2018.
[66] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou. Self-Consistency Improves Chain of Thought Reasoning in Language Models. In International Conference on Learning Representations, 2023.
[67] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain of Thought Prompting Elicits Reasoning in Large Language Models. In Neural Information Processing Systems, 2022. | 2309.12284#74 |
2309.12284 | 75 | [68] Y. Weng, M. Zhu, F. Xia, B. Li, S. He, K. Liu, and J. Zhao. Large Language Models are Better Reasoners with Self-Verification. Preprint arXiv:2212.09561, 2023.
[69] Z. Yuan, H. Yuan, C. Li, G. Dong, C. Tan, and C. Zhou. Scaling Relationship on Learning Mathematical Reasoning with Large Language Models. Preprint arXiv:2308.01825, 2023.
[70] X. Yue, X. Qu, G. Zhang, Y. Fu, W. Huang, H. Sun, Y. Su, and W. Chen. MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning. Preprint arXiv:2309.05653, 2023.
[71] A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, W. Zheng, X. Xia, W. Tam, Z. Ma, Y. Xue, J. Zhai, W. Chen, P. Zhang, Y. Dong, and J. Tang. GLM-130B: An Open Bilingual Pre-trained Model. Preprint arXiv:2210.02414, 2022. | 2309.12284#75 |
2309.12284 | 76 | [72] B. Zhao, K. Mopuri, and H. Bilen. Dataset Condensation with Gradient Matching. In International Conference on Learning Representations, 2021.
[73] C. Zhou, P. Liu, P. Xu, S. Iyer, J. Sun, Y. Mao, X. Ma, A. Efrat, P. Yu, L. Yu, S. Zhang, G. Ghosh, M. Lewis, L. Zettlemoyer, and O. Levy. LIMA: Less Is More for Alignment. Preprint arXiv:2305.11206, 2023.
[74] D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet, Q. Le, and E. Chi. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. In International Conference on Learning Representations, 2023.
[75] X. Zhu. Machine Teaching: An Inverse Problem to Machine Learning and an Approach Toward Optimal Education. In AAAI Conference on Artificial Intelligence, 2015.
| 2309.12284#76 |
2309.12284 | 77 |
A PROMPTS
A.1 REPHRASING PROMPTS
# Example A.1: Prompt for Rephrasing GSM8K Questions
You are an AI assistant to help me rephrase questions. Follow the given examples.
Question: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? Rephrase the above question: What is the amount of money that Olivia has left after purchasing five bagels for $3 each, if she initially had $23?
Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? Rephrase the above question: After losing 23 golf balls on Tuesday and an additional 2 on Wednesday, how many golf balls does Michael have left if he initially had 58 golf balls? | 2309.12284#77 |
2309.12284 | 78 | Question: Angelo and Melanie want to plan how many hours over the next week they should study together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and 1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days should they plan to study total over the next week if they take a 10-minute break every hour, include 3 10-minute snack breaks each day, and 30 minutes for lunch each day? Rephrase the above question: Angelo and Melanie need to study 2 chapters in their textbook and 4 worksheets for their upcoming test. They have planned to dedicate 3 hours for each chapter and 1.5 hours for each worksheet. They can study for a maximum of 4 hours each day, taking into account 10-minute breaks every hour, 3 10-minute snack breaks per day, and 30 minutes for lunch. How many days do they need to study in total over the next week to complete their study plan? | 2309.12284#78 |
2309.12284 | 79 | Question: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? Rephrase the above question: If Leah had 32 chocolates and her sister had 42, and they both consumed 35 chocolates, what is the total number of chocolates that they have left?
Question: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? Rephrase the above question: If there were initially nine computers in the server room and five more computers were added each day from Monday to Thursday, what is the current total number of computers in the server room?
Question: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? Rephrase the above question: If Jason initially had 20 lollipops and now has 12 after giving some to Denny, how many lollipops did he give to Denny? | 2309.12284#79 |
2309.12284 | 80 | Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He rearranged five of these boxes into packages of six highlighters each and sold them for $3 per package. He sold the rest of the highlighters separately at the rate of three pens for $2. How much profit did he make in total, in dollars? Rephrase the above question: Sam purchased 12 boxes, each containing 30 highlighter pens, at $10 per box. He repackaged five of these boxes into sets of six highlighters and sold them for $3 per set. He sold the remaining highlighters individually at a rate of three pens for $2. What is the total profit he made in dollars?
Question: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? Rephrase the above question: If there were initially 15 trees in the grove and the grove workers are planning to plant more trees today, resulting in a total of 21 trees, how many trees did the workers plant today?
Question: {Q} Rephrase the above question:
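The template above (with the {Q} slot) is filled with a new question and sent to a chat model to generate rephrasings. The sketch below assumes an OpenAI-style chat client and abbreviates the few-shot examples to a single one; it is an illustration of the prompting step, not the paper's exact generation pipeline.

```python
from openai import OpenAI  # assumes an OpenAI-compatible chat endpoint

client = OpenAI()

REPHRASE_TEMPLATE = (
    "You are an AI assistant to help me rephrase questions. Follow the given examples.\n"
    "Question: Olivia has $23. She bought five bagels for $3 each. "
    "How much money does she have left?\n"
    "Rephrase the above question: What is the amount of money that Olivia has left "
    "after purchasing five bagels for $3 each, if she initially had $23?\n"
    "Question: {question}\n"
    "Rephrase the above question:"
)

def rephrase(question: str, model: str = "gpt-3.5-turbo") -> str:
    # Fill the {Q} slot with a new GSM8K question and sample one rephrasing.
    prompt = REPHRASE_TEMPLATE.format(question=question)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return resp.choices[0].message.content.strip()
```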
| 2309.12284#80 |
2309.12284 | 81 | Question: {Q} Rephrase the above question:
# Example A.2: Prompts for Rewriting Question with Answer into a Declarative Statement
You are an AI assistant to help me rewrite question into a declarative statement when its answer is provided. Follow the given examples and rewrite the question.
Question: How many cars are in the parking lot? The answer is: 5. Result: There are 5 cars in the parking lot. ... Question: {Q} The answer is: {A}. Result:
A.2 EXPERIMENTAL DETAILS
Training Details. For the full fine-tuning setting, we use the AdamW optimizer to train the model for 3 epochs with a batch size of 128. We use 8 NVIDIA A100 GPUs to train the 7B and 13B models; the learning rate is set to 2e-5 with a 3% learning-rate warmup. For the 70B model QLoRA fine-tuning, the LoRA rank and alpha are 96 and 16, with a 0.05 dropout between the two matrices. The LoRA matrices are appended to both the attention layers and the MLP layers. We use the same AdamW optimizer but with a 1e-4 learning rate and without a learning-rate warmup. The training prompt (Prompt 1) follows Alpaca [59], with the instruction replaced by the MetaMathQA question. | 2309.12284#81 |
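These hyperparameters can be sketched with Hugging Face `transformers` and `peft` roughly as follows; the per-device batch/accumulation split and the LoRA target-module names are assumptions for illustration, not taken from the released training code.

```python
from transformers import TrainingArguments
from peft import LoraConfig

# Full fine-tuning (7B/13B): AdamW, 3 epochs, global batch size 128, lr 2e-5, 3% warmup.
full_ft_args = TrainingArguments(
    output_dir="metamath-7b",
    num_train_epochs=3,
    per_device_train_batch_size=4,     # 8 A100 GPUs x 4 x grad-accum 4 -> global batch 128 (assumed split)
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    warmup_ratio=0.03,
    optim="adamw_torch",
)

# 70B QLoRA: rank 96, alpha 16, dropout 0.05, adapters on attention and MLP projections,
# lr 1e-4 with no warmup. Module names below are typical for LLaMA-2 and are an assumption.
qlora_config = LoraConfig(
    r=96,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
qlora_args = TrainingArguments(
    output_dir="metamath-70b-qlora",
    num_train_epochs=3,
    learning_rate=1e-4,
    warmup_ratio=0.0,
    optim="adamw_torch",
)
```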
2309.12284 | 82 | # Prompt 1: Training Prompt
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
# Prompt 2: Evaluation Prompt
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response: Let's think step by step.
Evaluation Prompting. Unlike the few-shot prompting used to evaluate closed-source models, we find that zero-shot prompting works better for fine-tuned LLMs and also reduces inference cost. Hence, MetaMath uses the zero-shot Evaluation Prompt 2 for GSM8K and MATH, where the instruction is replaced by the test question. We set the temperature to 0 for the fine-tuned LLaMA models.
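A minimal sketch of this zero-shot setup, assuming a generic Hugging Face causal LM and tokenizer (placeholders, not the released evaluation script):

```python
EVAL_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response: Let's think step by step."
)

def build_eval_prompt(test_question: str) -> str:
    # Zero-shot: the test question alone fills {instruction}; no few-shot exemplars are prepended.
    return EVAL_TEMPLATE.format(instruction=test_question)

# Greedy decoding stands in for temperature 0 (placeholder `model` / `tokenizer`):
# inputs = tokenizer(build_eval_prompt(question), return_tensors="pt")
# outputs = model.generate(**inputs, do_sample=False, max_new_tokens=512)
```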
Answer Extraction. Unlike Wei et al. [67], who use complex string rules to extract the final answer, MetaMath (in line with WizardMath [38]) simply extracts the string after "The answer is:" as the final answer. To teach the model this extraction format, we append "The answer is: {gold answer}" to the end of every answer in the MetaMathQA dataset, where the gold answer is replaced by the respective question's answer.
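A short sketch of this extraction rule and of appending the gold answer during data construction (illustrative helper names, not the released code):

```python
ANSWER_MARKER = "The answer is:"

def extract_answer(completion: str) -> str:
    # Keep only the text after the last occurrence of the marker.
    if ANSWER_MARKER not in completion:
        return ""
    return completion.rsplit(ANSWER_MARKER, 1)[1].strip().rstrip(".")

def append_gold_answer(solution: str, gold_answer: str) -> str:
    # Training-side counterpart: every MetaMathQA answer ends with the marker plus the gold answer.
    return f"{solution}\nThe answer is: {gold_answer}"

assert extract_answer("So she has 8 apples left. The answer is: 8") == "8"
```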
| 2309.12284#82 |
2309.12284 | 84 | Question: A robe takes 2 bolts of blue fiber and x% that much white fiber. It takes a total of 3 bolts. What is the value of unknown variable x? (Ground-truth answer is 50) SFT Answer: We know that the robe takes 2 * x% = 3 bolts. So 2 * x% = 3. So x% = 3/2 = 60%. So x = 60. #### 60. The answer is: 60 RFT Answer: We know that the robe takes 2 bolts of blue fiber and x% that much white fiber, so it takes 2 * 0.01 = 0.02 bolts of white fiber. So in total, it takes 2 + 0.02 = <<2+0.02=2.02>>2.02 bolts of blue and white fiber. Since we know that it takes 3 bolts in total, we can write the equation 2.02 = 3. We can solve for x by multiplying both sides by 3, getting 6.04 = 3 * 0.02. We can then write the equation 6.04 = 3, and solve for x by dividing both sides by 6.04, getting x = 0.02. #### 2 The answer is: 2 WizardMath Answer: Step 1: Define the variables. | 2309.12284#84 |
2309.12284 | 85 | by dividing both sides by 6.04, getting x = 0.02. #### 2 The answer is: 2 WizardMath Answer: Step 1: Define the variables. Let x% of the blue fiber be represented by the variable y. Then, the robe takes 2y bolts of blue fiber. Step 2: Set up the equation. Since the robe takes a total of 3 bolts, we can write the equation as follows: 2y + xy = 3 Step 3: Simplify the equation. We can simplify the equation by combining the terms with y: 2y + xy = 3 3y + xy = 3 Step 4: Solve for x. To solve for x, we need to isolate xy on one side of the equation. We can do this by subtracting 3y from both sides: 3y + xy - 3y = 3 - 3y xy = 3 - 3y Step 5: Find the value of x. Now, we need to find the value of x. We know that xy = 3 - 3y, and we want to express y in terms of x. To do this, we can divide both sides of the equation by x: (3 - 3y) / x = y Step 6: Solve for y. Now, we can solve for y by multiplying both sides | 2309.12284#85 |
2309.12284 | 86 | of the equation by x: 3 - 3y = xy 3 - 3y = x(3 - 3y) Step 7: Simplify the equation. We can simplify the equation by expanding and combining like terms: 3 - 3y = 3 - 3y Step 8: Find the value of x. Now, we can see that y = 0, which means that x% of the blue fiber is 0%. Since we are dealing with a percentage, the only possible value for x is 0. Therefore, the value of unknown variable x is 0. Final Answer: x = 0. The answer is: x Our Answer: The robe takes 2 bolts of blue fiber. It takes x% of that much white fiber, which means it takes (x/100) * 2 bolts of white fiber. In total, it takes 3 bolts. So, we can write the equation as: 2 + (x/100) * 2 = 3 Let's simplify and solve for x: 2 + (2x/100) = 3 To isolate x, we subtract 2 from both sides of the equation: 2 + | 2309.12284#86 |
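For reference, completing the arithmetic of the correct solution above recovers the ground-truth answer of 50 stated in the question:

```latex
2 + \frac{2x}{100} = 3
\;\Longrightarrow\; \frac{2x}{100} = 1
\;\Longrightarrow\; x = 50
```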
understanding and exhibited excellent problem-solving ability. Despite the
great success, most existing open-source LLMs (e.g., LLaMA-2) are still far
away from satisfactory for solving mathematical problem due to the complex
reasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned
language model that specializes in mathematical reasoning. Specifically, we
start by bootstrapping mathematical questions by rewriting the question from
multiple perspectives without extra knowledge, which results in a new dataset
called MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA.
Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for
mathematical reasoning demonstrate that MetaMath outperforms a suite of
open-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4%
on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same
size by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of
82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the
MetaMathQA dataset, the MetaMath models with different model sizes and the
training code for public use. | http://arxiv.org/pdf/2309.12284 | Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu | cs.CL, cs.AI | Technical Report, Work in Progress. Project Page:
https://meta-math.github.io/ | null | cs.CL | 20230921 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2308.09583"
},
{
"id": "2305.20050"
},
{
"id": "1707.06347"
},
{
"id": "2204.02311"
},
{
"id": "2211.09085"
},
{
"id": "2305.10403"
},
{
"id": "1812.00524"
},
{
"id": "2202.00132"
},
{
"id": "2309.12288"
},
{
"id": "2305.07759"
},
{
"id": "2309.04564"
},
{
"id": "2107.03374"
},
{
"id": "1811.10959"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2203.13474"
},
{
"id": "2308.01825"
},
{
"id": "2110.14168"
},
{
"id": "2308.07758"
},
{
"id": "2305.06161"
},
{
"id": "2309.05653"
},
{
"id": "2303.05398"
},
{
"id": "2210.06726"
},
{
"id": "2212.09561"
},
{
"id": "2211.12588"
},
{
"id": "1503.02531"
},
{
"id": "2210.11610"
},
{
"id": "1907.11692"
},
{
"id": "2306.08568"
},
{
"id": "2210.02414"
},
{
"id": "2305.14314"
},
{
"id": "2305.11206"
},
{
"id": "2309.02144"
},
{
"id": "2306.01694"
}
] |
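Editor's note: the chunk in the row above (2309.12284#86) works through the GSM8K robe question, where 2 bolts of blue fiber plus x% of that amount in white fiber total 3 bolts, i.e. 2 + (x/100)·2 = 3. A quick check of that arithmetic in plain Python (a standalone sketch, not code from the paper):

```python
# Check the corrected solution to the robe question quoted above:
# blue = 2 bolts, white = (x / 100) * blue, total = 3 bolts.
blue = 2
total = 3

# Rearranged: (x / 100) * blue = total - blue  =>  x = 100 * (total - blue) / blue
x = 100 * (total - blue) / blue
print(x)  # 50.0 -> the white fiber is 50% of the blue fiber, not the 0% of the faulty derivation

# Sanity check: plugging x back in reproduces the stated total of 3 bolts.
assert blue + (x / 100) * blue == total
```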
2309.10305 | 0 | arXiv:2309.10305v2 [cs.CL] 20 Sep 2023
# Baichuan 2: Open Large-scale Language Models
Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Chao Yin, Chenxu Lv, Da Pan Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji Jian Xie, Juntao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu Baichuan Inc.
# Abstract | 2309.10305#0 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 0 | arXiv:2309.10621v1 [cs.IR] 19 Sep 2023
# Large language models can accurately predict searcher preferences
PAUL THOMAS, Microsoft, Australia SETH SPIELMAN, Microsoft, USA NICK CRASWELL, Microsoft, USA BHASKAR MITRA, Microsoft Research, Canada
Relevance labels, which indicate whether a search result is valuable to a searcher, are key to evaluating and optimising search systems. The best way to capture the true preferences of users is to ask them for their careful feedback on which results would be useful, but this approach does not scale to produce a large number of labels. Getting relevance labels at scale is usually done with third-party labellers, who judge on behalf of the user, but there is a risk of low-quality data if the labeller doesnât understand user needs. To improve quality, one standard approach is to study real users through interviews, user studies and direct feedback, find areas where labels are systematically disagreeing with users, then educate labellers about user needs through judging guidelines, training and monitoring. This paper introduces an alternate approach for improving label quality. It takes careful feedback from real users, which by definition is the highest-quality first-party gold data that can be derived, and develops an large language model prompt that agrees with that data. | 2309.10621#0 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
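Editor's note: the record above (2309.10621#0) describes developing an LLM prompt whose relevance labels agree with first-party gold data. A minimal sketch of that workflow; `call_llm` is a placeholder for whatever completion API is available, and the prompt wording is illustrative rather than the one used at Bing:

```python
# Sketch: ask an LLM for a graded relevance label, then measure agreement
# with gold labels from the query originator. `call_llm` is a placeholder
# callable (prompt -> response text), NOT an API described in the paper.

PROMPT_TEMPLATE = """You are a search quality rater.
Query: {query}
Result: {result}
On a scale of 0 (irrelevant) to 2 (highly relevant), how well does the
result satisfy the query? Answer with a single digit."""

def label_with_llm(call_llm, query: str, result: str) -> int:
    response = call_llm(PROMPT_TEMPLATE.format(query=query, result=result))
    digits = [c for c in response if c.isdigit()]
    return int(digits[0]) if digits else 0  # default to "irrelevant" if unparseable

def agreement(gold: list[int], predicted: list[int]) -> float:
    """Fraction of items where the LLM label matches the gold label exactly."""
    matches = sum(g == p for g, p in zip(gold, predicted))
    return matches / len(gold)
```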
2309.10691 | 0 | arXiv:2309.10691v2 [cs.CL] 12 Oct 2023
# MINT: EVALUATING LLMS IN MULTI-TURN INTERACTION WITH TOOLS AND LANGUAGE FEEDBACK
Xingyao Wang1∗, Zihan Wang1,2∗†, Jiateng Liu1, Yangyi Chen1, Lifan Yuan1†, Hao Peng1, Heng Ji1 1 University of Illinois Urbana-Champaign, 2 Renmin University of China 1{xingyao6,zihanw,jiateng5,yangyic3,haopeng,hengji}@illinois.edu
# ABSTRACT | 2309.10691#0 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10305 | 1 | # Abstract
Large language models (LLMs) have demonstrated remarkable performance on a variety of natural language tasks based on just a few examples of natural language instructions, reducing the need for extensive feature engineering. However, most powerful LLMs are closed-source or limited in their capability for languages other than English. In this technical report, we present Baichuan 2, a series of large-scale multilingual language models containing 7 billion and 13 billion parameters, trained from scratch, on 2.6 trillion tokens. Baichuan 2 matches or outperforms other open-source models of similar size on public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan 2 excels in vertical domains such as medicine and law. We will release all pre-training model checkpoints to benefit the research community in better understanding the training dynamics of Baichuan 2.
# 1 Introduction | 2309.10305#1 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 1 | We present ideas and observations from deploying language models for large-scale relevance labelling at Bing, and illustrate with data from TREC. We have found large language models can be effective, with accuracy as good as human labellers and similar capability to pick the hardest queries, best runs, and best groups. Systematic changes to the prompts make a difference in accuracy, but so too do simple paraphrases. To measure agreement with real searchers needs high-quality "gold" labels, but with these we find that models produce better labels than third-party workers, for a fraction of the cost, and these labels let us train notably better rankers. CCS Concepts: • Information systems → Test collections; Relevance assessment; • Computing methodologies → Natural language generation. | 2309.10621#1 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
Additional Key Words and Phrases: large language models, offline evaluation, labelling
# 1 LABELLING RELEVANCE
Relevance labels – annotations that say whether a result is relevant to a searcher's need – are essential for evaluating and improving information retrieval systems. Labels can come from (in decreasing order of both reliability and difficulty to obtain): (i) actual users, (ii) subject-matter experts, (iii) professional assessors (without subject-matter expertise), or (iv) crowd workers (without extensive training in the relevance assessment tasks). Label quality can be evaluated by comparing them to some gold standard labels [Saracevic 2008]. | 2309.10621#1 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 1 | To solve complex tasks, large language models (LLMs) often require multiple rounds of interactions with the user, sometimes assisted by external tools. However, current evaluation protocols often emphasize benchmark performance with single-turn exchanges, neglecting the nuanced interactions among the user, LLMs, and external tools, while also underestimating the importance of natural language feedback from users. These oversights contribute to discrepancies between research benchmark evaluations and real-world use cases. We introduce MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn interactions by (1) using tools and (2) leveraging natural language feedback. To ensure reproducibility, we provide an evaluation framework where LLMs can access tools by executing Python code and receive users' natural language feedback simulated by GPT-4. We repurpose a diverse set of established evaluation datasets focusing on reasoning, coding, and decision-making and carefully curate them into a compact subset for efficient evaluation. Our analysis of 20 open- and closed-source LLMs offers intriguing findings. (a) LLMs generally benefit from tools and language feedback, with performance gains (absolute, same below) of | 2309.10691#1 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10305 | 2 | # 1 Introduction
The field of large language models has witnessed promising and remarkable progress in recent years. The size of language models has grown from millions of parameters, such as ELMo (Peters et al., 2018), GPT-1 (Radford et al., 2018), to billions or even trillions of parameters such as GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022; Anil et al., 2023) and Switch Transformers (Fedus et al., 2022). This increase in scale has led to significant improvements in the capabilities of language models, enabling more human-like fluency and the ability to perform a diverse range of natural language tasks. With the introduction of
ChatGPT (OpenAI, 2022) from OpenAI, the power of these models to generate human-like text has captured widespread public attention. ChatGPT demonstrates strong language proficiency across a variety of domains, from conversing casually to explaining complex concepts. This breakthrough highlights the potential for large language models to automate tasks involving natural language generation and comprehension. | 2309.10305#2 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 2 | This paper defines gold standard labels as those from the query topic originator [Bailey et al. 2008]. The originator could be a relevance assessor who develops their own query topic, then labels the results. Even better, the originator could be a real user who did the query in situ, knows exactly what they were trying to find, and gives careful feedback on what's relevant. If each search only has one originator, then their gold labels are the ones that all other labels should be evaluated against. Given a set of first-party labels, other parties (human or machine) can at best perfectly agree, but can never "outperform" the given gold labels.
Third-party assessors may disagree with gold because they misunderstand the user's preference. If workers are systematically misunderstanding user needs – if the labels are biased – this cannot be fixed by getting more data. For
Authors' addresses: Paul Thomas, Microsoft, Adelaide, Australia, [email protected]; Seth Spielman, Microsoft, Boulder, USA, [email protected]; Nick Craswell, Microsoft, Seattle, USA, [email protected]; Bhaskar Mitra, Microsoft Research, Montreal, Canada, [email protected].
| 2309.10621#2 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 2 | and closed-source LLMs offers intriguing findings. (a) LLMs generally benefit from tools and language feedback, with performance gains (absolute, same below) of 1–8% for each turn of tool use and 2–17% with natural language feedback. (b) Better single-turn performance does not guarantee better multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised instruction-finetuning (SIFT) and reinforcement learning from human feedback (RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure progress and incentivize research in improving LLMs' capabilities in multi-turn interactions, especially for open-source communities where multi-turn human evaluation can be less accessible compared to commercial LLMs with a larger user base. 1 | 2309.10691#2 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 2 | This paper aims to understand the impacts of various data combinations (e.g., web text, wikipedia, github, books) on the training of large language models using SlimPajama. SlimPajama [33] is a rigorously deduplicated, multi-source dataset, which has been refined and further deduplicated to 627B tokens from the extensive 1.2T tokens RedPajama dataset [7] contributed by Together. We've termed our research as SlimPajama-DC, an empirical analysis designed to uncover fundamental characteristics and best practices associated with employing SlimPajama in the training of large language models. During our research with SlimPajama, two pivotal observations emerged: (1) Global deduplication vs. local deduplication. We analyze and discuss how global (across different sources of datasets) and local (within the single source of dataset) deduplications affect the performance of trained models. (2) Proportions of high-quality/highly-deduplicated multi-source datasets in the combination. To study this, we construct six configurations of SlimPajama dataset | 2309.10818#2 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
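Editor's note: the SlimPajama-DC record above contrasts global deduplication (across sources) with local deduplication (within a single source). A toy illustration of the difference using exact SHA-256 keys; the actual pipeline is reported to use more elaborate fuzzy deduplication, so this is only a sketch:

```python
import hashlib

def _key(doc: str) -> str:
    # Exact-match key; real pipelines typically rely on fuzzy methods such as MinHash.
    return hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()

def local_dedup(sources: dict[str, list[str]]) -> dict[str, list[str]]:
    """Remove duplicates within each source independently."""
    out = {}
    for name, docs in sources.items():
        seen, kept = set(), []
        for d in docs:
            k = _key(d)
            if k not in seen:
                seen.add(k)
                kept.append(d)
        out[name] = kept
    return out

def global_dedup(sources: dict[str, list[str]]) -> dict[str, list[str]]:
    """Remove duplicates across all sources: a document kept once is dropped
    from every other source as well."""
    seen, out = set(), {}
    for name, docs in sources.items():
        kept = []
        for d in docs:
            k = _key(d)
            if k not in seen:
                seen.add(k)
                kept.append(d)
        out[name] = kept
    return out
```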
2309.10305 | 3 | While there have been exciting breakthroughs and applications of LLMs, most leading LLMs like GPT-4 (OpenAI, 2023), PaLM-2 (Anil et al., 2023), and Claude (Claude, 2023) remain closed-sourced. Developers and researchers have limited access to the full model parameters, making it difficult for the community to deeply study or fine-tune these systems. More openness and transparency around LLMs could accelerate research and responsible development within this rapidly advancing field. LLaMA (Touvron et al., 2023a), a series of large language models developed by Meta containing up to 65 billion parameters, has significantly benefited the LLM research community by being fully open- sourced. The open nature of LLaMA, along with other open-source LLMs such as OPT (Zhang et al., 2022), Bloom (Scao et al., 2022), MPT (MosaicML, 2023) and Falcon (Penedo et al., 2023), enables researchers to freely access the models for examination, experimentation, and further development. This transparency and access distinguishes LLaMA from other proprietary LLMs. By providing full access, the open-source LLMs have accelerated research | 2309.10305#3 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 3 | example, consider a pool of workers who do not understand which queries are navigational [Broder 2002]. When a first-party user wants to navigate to a site, the third-party labels do not reward retrieval of that site. The resulting labels do not help us build a search system that performs well on navigational queries, and this can't be solved by getting more labels from the biased worker pool. Working with human labellers, especially crowd workers, can also lead to other well-documented problems including mistakes, other biases, collusion, and adversarial or "spammy" workers [Clough et al. 2013; Inel et al. 2023; Thomas et al. 2022]. The resulting labels can be low-quality, and using them for training or making decisions will develop a retrieval system that makes similar errors. | 2309.10621#3 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 3 | # INTRODUCTION
To address complex tasks, a Large Language Model (LLM) often needs multiple rounds of interaction with the user, sometimes aided by external tools (Schick et al., 2023b; ChatGPT Plugins; Mialon et al., 2023). LLMs' performance during multiple turns of user-LLM exchanges is crucial in real-world applications: roughly 73% of Human-ChatGPT conversations contain more than one turn based on 94k entries of ShareGPT data (2023)2. Meanwhile, the ability to adapt to user-provided natural language feedback is also pivotal for their practical utility. However, current LLM evaluations predominantly focus on single-turn input-output (Hendrycks et al., 2020; Chen et al., 2021) and often overlook user-provided natural language feedback (Liu et al., 2023d; Deng et al., 2023b; Yang et al., 2023a; Shridhar et al., 2020), creating a discrepancy between real-world use cases and evaluation. Measuring how much LLMs can benefit from both tools and natural language feedback during multi-turn interaction is essential to incentivize future research to improve LLMs' capabilities in real-world scenarios. | 2309.10691#3 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 3 | performance of trained models. deduplicated multi-source datasets in the combination. To study this, we construct six configurations of SlimPajama dataset and train individual ones using 1.3B Cerebras-GPT [11] model with Alibi [28] and SwiGLU [32]. Our best configuration outperforms the 1.3B model trained on RedPajama using the same number of training tokens by a significant margin. All our 1.3B models are trained on Cerebras 16× CS-2 cluster with a total of 80 PFLOP/s in bf16 mixed precision. We further extend our discoveries (such as increasing data diversity is crucial after global deduplication) on a 7B model with large batch-size training. Our models and the separate SlimPajama-DC datasets are available at: link1 and original SlimPajama is at: link2. | 2309.10818#3 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10621 | 4 | The standard path to obtaining higher-quality labels involves multiple steps. The first is to learn about real users through interviews, user studies, direct feedback on their preferences and implicit feedback on their preferences such as clicks [Dumais et al. 2014]. Studying associated relevance labels, and looking for systematic mistakes, can indicate patterns where labellers are misunderstanding what users want. The final step is to educate labellers, by reference to guidelines or examples, to minimise future errors: for example, Google uses over 170 pages of guidelines to educate their search quality raters on what makes a good Google result [Google LLC 2022]. Asking labellers to follow guidelines should lead to improvements in their output, and that improvement can be measured against ground truth that either comes from real users (did labellers agree with real users?) or is based on our best understanding of user preferences (did labellers agree with examples carefully chosen by experts to agree with our best understanding of users?). | 2309.10621#4 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
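Editor's note: the chunk above (2309.10621#4) ends by asking whether labellers agree with real users once guidelines are applied. A common way to quantify that is chance-corrected agreement; the snippet below is a self-contained sketch of Cohen's kappa on toy labels, not code from the paper:

```python
from collections import Counter

def cohens_kappa(labels_a: list[int], labels_b: list[int]) -> float:
    """Chance-corrected agreement between two sets of labels."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    if expected == 1.0:  # both raters constant and identical
        return 1.0
    return (observed - expected) / (1 - expected)

# Toy example: gold labels from real users vs. third-party labeller output.
gold     = [2, 1, 0, 2, 1, 0, 2]
labeller = [2, 1, 0, 1, 1, 0, 2]
print(round(cohens_kappa(gold, labeller), 3))  # roughly 0.79 on this toy data
```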
2309.10691 | 4 | ∗Equal contribution. †Work done during internship at UIUC. 1 Code is available on our project website: https://xingyaoww.github.io/mint-bench 2 https://sharegpt.com/
To bridge these gaps, we introduce MINT. It is a benchmark for LLMs that measures their performance during multi-turn interaction, focusing on two particular capabilities (§2.1): (1) tool-augmented task-solving; (2) leveraging natural language feedback. MINT mirrors the real-world User-LLM-Tool collaborative problem-solving setting. To solve a problem, the LLM can use external tools by generating and executing Python programs and/or collecting natural language feedback to refine its solutions; the feedback is provided by GPT-4 (OpenAI, 2023), aiming to simulate human users in a reproducible and scalable way.3 For a comprehensive evaluation, we include eight established datasets spanning reasoning, code generation, and decision-making (§2.2). To facilitate affordable multi-turn evaluation, after collecting 29,307 diverse instances from existing datasets (Tab. 1), we filter and sub-sample a compact dataset of 586 challenging and representative instances that require multi-turn interaction to solve. 4 | 2309.10691#4 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
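The MINT chunk above (2309.10691 #4) describes LLMs solving tasks by generating and executing Python programs and optionally receiving GPT-4-simulated user feedback. Below is a minimal sketch of such a user-LLM-tool loop; `query_llm`, `simulate_user_feedback`, and the "Answer:" convention are hypothetical placeholders for illustration, not the benchmark's actual interface.

```python
# A minimal sketch of the user-LLM-tool loop described in the MINT chunk above.
# query_llm and simulate_user_feedback are hypothetical callables, and the
# "Answer:" convention is an assumption for illustration, not MINT's actual protocol.
import contextlib
import io


def run_python(code: str, namespace: dict) -> str:
    """Execute model-written Python and return captured output or the error."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, namespace)
    except Exception as exc:
        return f"Error: {exc!r}"
    return buffer.getvalue() or "(no output)"


def interact(task: str, query_llm, simulate_user_feedback, max_turns: int = 5):
    history, namespace = [f"Task: {task}"], {}
    for _ in range(max_turns):
        reply = query_llm("\n".join(history))       # LLM proposes code or a final answer
        history.append(reply)
        if reply.startswith("Answer:"):             # final solution proposed
            return reply
        observation = run_python(reply, namespace)  # tool call: execute the Python program
        feedback = simulate_user_feedback(history)  # optional GPT-4-simulated user feedback
        history.append(f"Observation: {observation}\nFeedback: {feedback}")
    return None
```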
2309.10305 | 5 | Authors are listed alphabetically, correspondent: [email protected].
However, most open-source large language models have focused primarily on English. For instance, the main data source for LLaMA is Common Crawl1, which comprises 67% of LLaMA's pre-training data but is filtered to English content only. Other open source LLMs such as MPT (MosaicML, 2023) and Falcon (Penedo et al., 2023) are also focused on English and have limited capabilities in other languages. This hinders the development and application of LLMs in specific languages, such as Chinese. | 2309.10305#5 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 5 | This paper introduces a new way of reaching very high-quality labels, that match real user preferences, by leveraging large language models (LLMs). In practice, LLM performance on any task can vary depending on the wording of the prompt [Zhang et al. 2022; Zhou et al. 2022]. Our approach is to get a small sample of feedback that perfectly reflects real user preferences, because they come from real users who did a careful job of giving feedback. We then choose a prompt for the LLM that generates labels, such that the labels have the best match with first-party ground truth. | 2309.10621#5 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
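The chunk above (2309.10621 #5) describes choosing, from several candidate prompts, the one whose LLM-generated labels best match first-party ground truth. A minimal sketch of that selection loop, with `label_with_llm` as a hypothetical call to the labelling model:

```python
# Sketch of the prompt-selection loop: score each candidate prompt by how often
# its LLM labels match first-party gold labels, and keep the best one.
# label_with_llm is a hypothetical call to the labelling model.
def pick_best_prompt(candidate_prompts, gold, label_with_llm):
    """gold is a list of (query, document, gold_label) triples from real searchers."""
    best_prompt, best_accuracy = None, -1.0
    for prompt in candidate_prompts:
        hits = sum(
            label_with_llm(prompt, query, document) == gold_label
            for query, document, gold_label in gold
        )
        accuracy = hits / len(gold)
        if accuracy > best_accuracy:
            best_prompt, best_accuracy = prompt, accuracy
    return best_prompt, best_accuracy
```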
2309.10691 | 5 | We evaluate 4 closed- and 16 open-source LLMs with MINT. We measure LLMs' tool-augmented task-solving capability by analyzing their performance from multi-turn tool use (§3.2). To assess the ability to leverage natural language feedback, we measure their performance upon natural language feedback by GPT-4 (§3.3). Our results show that:
• All models benefit from tool interaction and natural language feedback, with absolute performance gains by 1–8% for each additional turn of tool use, and 2–17% with natural language feedback. • Better single-turn performance does not necessarily lead to better multi-turn performance. For example, while Claude-2 outperforms its predecessor Claude-1 in single-turn evaluation, the latter benefits more from interaction and performs better with > 2 turns. | 2309.10691#5 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 5 | Table of contents (excerpt): 2.1 Number of Tokens; 2.2 Dataset Token Frequency Statistics; 2.3 Dataset Processing Procedure; 2.3.1 Low-length Document Filtering; 2.3.2 Global Deduplication; 3.1 SlimPajama; 3.2 RefinedWeb; 4.1 Network Architecture; 4.2 Training Details; 5.1 Huggingface Leaderboard Evaluation with Harness | 2309.10818#5 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 6 | In this technical report, we introduce Baichuan 2, a series of large-scale multilingual language models. Baichuan 2 has two separate models, Baichuan 2-7B with 7 billion parameters and Baichuan 2-13B with 13 billion parameters. Both models were trained on 2.6 trillion tokens, which to our knowledge is the largest to date, more than double that of Baichuan 1 (Baichuan, 2023b,a). With such a massive amount of training data, Baichuan 2 achieves significant improvements over Baichuan 1. On general benchmarks like MMLU (Hendrycks et al., 2021a), CMMLU (Li et al., 2023), and C-Eval (Huang et al., 2023), Baichuan 2-7B achieves nearly 30% higher performance compared to Baichuan 1-7B. Specifically, Baichuan 2 is optimized to improve performance on math and code problems. On the GSM8K (Cobbe et al., 2021) and HumanEval (Chen et al., 2021) evaluations, Baichuan 2 nearly doubles the results of the Baichuan 1. In addition, Baichuan 2 also demonstrates strong performance on medical and legal domain tasks. On benchmarks such as MedQA | 2309.10305#6 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 6 | Using machine learning for labelling raises the question of circularity, since labels are used for training and optimising retrieval systems, which may use machine learning. Machine-learned models have long been employed for relevance estimation. These predicted or automatic relevance models are often trained on human relevance labels, and have historically been inferior in quality to the labels they were trained on. Because they are cheap to run, the machine-learned models are employed as rankers, estimating relevance at a scale that would be impractical to achieve with human assessors, and focusing on optimising the relative ordering of items, particularly in top ranks. With GPT-4 [OpenAI 2023] and similar large language models, we are now observing a new opportunity: the ability to augment relevance estimators with assessment guidelines as part of the prompt, as well as a different kind of trade-off whereby LLM labels may match first-party gold labels more closely than some third-party human labels do. GPT-4 is still too inefficient to be deployed as a real-time ranking model serving web-scale query loads, where even a tenth of a second increase in query processing latency has been shown to negatively impact searchers [Brutlag 2009; Schurman and Brutlag 2009]. This creates a new opportunity to employ these automatic relevance assessments from GPT-4 for training and evaluating
more efficient ranking models, which may be seen as a form of knowledge distillation [Hinton et al. 2015]. | 2309.10621#6 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
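The chunk above (2309.10621 #6) notes that GPT-4 is too slow for real-time ranking, but that its labels can train and evaluate more efficient rankers, a form of knowledge distillation. One possible sketch, assuming scikit-learn and hypothetical `llm_label` and `extract_features` helpers:

```python
# Sketch of distilling LLM relevance labels into a lightweight ranker that is cheap
# enough to serve online. scikit-learn is one possible choice; llm_label and
# extract_features are hypothetical helpers.
from sklearn.linear_model import LogisticRegression


def train_distilled_ranker(examples, llm_label, extract_features):
    """examples: iterable of (query, document) pairs to be labelled by the teacher LLM."""
    X = [extract_features(query, doc) for query, doc in examples]  # e.g. BM25, click features
    y = [llm_label(query, doc) for query, doc in examples]         # teacher labels from the LLM
    ranker = LogisticRegression(max_iter=1000)
    ranker.fit(X, y)
    return ranker  # rank at query time with ranker.predict_proba(features)[:, 1]
```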
2309.10691 | 6 | • There is a notable gap between open- and closed-source LLMs in multi-turn interaction performance. For example, with the help of language feedback, even the best open-source model, Lemur-70b-chat-v1, lags behind the best closed-source model by 8.7% in absolute success rate. • On most LLMs we evaluated, models trained with supervised instruction fine-tuning (SIFT, Wei et al., 2022) and reinforcement learning from human feedback (RLHF, Ouyang et al., 2022a) perform worse in multi-turn settings regardless of the presence of language feedback. For example, SIFT hurts Codellama-34B's multi-turn performance by 11.1% and 15.4% (w/ feedback), and RLHF negatively affects LLaMA-2-70B by 8.5% and 8.7%, respectively. Notable exceptions are Vicuna-7B and Lemur-70b-chat-v1, where SIFT improves multi-turn interaction. | 2309.10691#6 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 6 | Table of contents (excerpt): 5.1 Huggingface Leaderboard Evaluation with Harness; 5.2 More Evaluations; 5.3 Training Loss; 6.1 7B Training Data Combination; 6.2 7B Model Training Configurations; 6.3 Fast Training with Large Batch-size; 6.4 Progressive Training on Weight Decay; 6.5 Results of Pre-training and Instruction Tuning; 7.1 RedPajama, SlimPajama and Others; 7.2 Data Processing and Optimization Approaches | 2309.10818#6 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10621 | 7 | more efficient ranking models, which may be seen as a form of knowledge distillation [Hinton et al. 2015].
For other annotation tasks there is evidence that LLMs can be comparable to crowd workers, using standard metrics such as agreement or correlation [Alizadeh et al. 2023; Gilardi et al. 2023; Törnberg 2023]. However, we argue it is more interesting to compare labels to a relatively small set of first-party ground truth, from real searchers. We can then ask how well different labellers, human or LLM, do in generating labels that match real user preferences. Our study shows that LLM labellers can do better on this task than several populations of human labellers. The worst are the crowd labellers, who are least diligent and least knowledgeable about user preferences. Better are human raters who are more knowledgeable and diligent, as demonstrated by better agreement with first-party ground truth (gold). LLMs perform
better on this metric than any population of human labellers that we study. Our results demonstrate the potential for LLMs as a tool for obtaining high-quality relevance labels that match what users think.
# 2 EXPERIMENTS: TREC-ROBUST | 2309.10621#7 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
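The chunk above (2309.10621 #7) compares labeller populations by how well their labels match a small set of first-party gold labels. A sketch of that comparison using raw agreement and chance-corrected agreement (Cohen's kappa via scikit-learn); the input structures are assumed for illustration:

```python
# Sketch of scoring labeller populations (crowd workers, trained assessors, an LLM)
# against first-party gold labels, using raw agreement and Cohen's kappa.
# The input structures are assumed for illustration.
from sklearn.metrics import cohen_kappa_score


def score_labellers(gold_labels, labels_by_population):
    """labels_by_population maps a population name to its labels, aligned with gold_labels."""
    report = {}
    for name, labels in labels_by_population.items():
        agreement = sum(a == b for a, b in zip(labels, gold_labels)) / len(gold_labels)
        kappa = cohen_kappa_score(labels, gold_labels)  # chance-corrected agreement
        report[name] = {"accuracy": agreement, "kappa": kappa}
    return report
```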
2309.10691 | 7 | By fixing the LLM to evaluate and changing the feedback-provider LLM, MINT can measure different LLMs' capabilities in providing useful feedback (§3.4); we find that feedback-providing ability could be orthogonal to task-solving ability: despite performing the worst in task-solving, CodeLLaMA-34B-Instruct can provide feedback to improve the stronger GPT-3.5. Additionally, MINT's challenging evaluation reveals undesired artifacts in ShareGPT data (2023), a widely used dataset for instruction tuning (§3.5). Furthermore, we show that GPT4-simulated language feedback is as helpful as human-written feedback based on human evaluation and task performance (§3.6).
We expect that MINT can help track progress and incentivize future research in improving LLM's multi-turn task-solving and/or feedback-providing capabilities, especially for open-source communities where human evaluation can be less accessible than commercial LLMs with a large user base.
# 2 MINT | 2309.10691#7 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
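The chunk above (2309.10691 #7) measures feedback-providing ability by fixing the evaluated LLM and swapping the feedback-provider LLM. A minimal sketch of that protocol; `evaluate` is a hypothetical harness returning a success rate over the benchmark tasks:

```python
# Sketch of the protocol: fix the task-solving LLM, swap the feedback-provider LLM,
# and compare success rates with and without language feedback.
# evaluate is a hypothetical harness returning a success rate over the benchmark tasks.
def feedback_gain(solver, feedback_providers, evaluate):
    baseline = evaluate(solver, feedback_provider=None)       # no language feedback
    gains = {}
    for provider in feedback_providers:
        with_feedback = evaluate(solver, feedback_provider=provider)
        gains[provider] = with_feedback - baseline            # feedback gain for this provider
    return baseline, gains
```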
2309.10305 | 8 | Additionally, we also released two chat models, Baichuan 2-7B-Chat and Baichuan 2-13B-Chat, optimized to follow human instructions. These models excel at dialogue and context understanding. We will elaborate on our approaches to improve the safety of Baichuan 2. By open-sourcing these models, we hope to enable the community to further improve the safety of large language models, facilitating more research on responsible LLMs development.
Furthermore, in the spirit of research collaboration and continuous improvement, we are also releasing the checkpoints of Baichuan 2 at various stages
1https://commoncrawl.org/
of training from 200 billion tokens up to the full 2.6 trillion tokens. We found that even for the 7 billion parameter model, performance continued to improve after training on more than 2.6 trillion tokens. By sharing these intermediary results, we hope to provide the community with greater insight into the training dynamics of Baichuan 2. Understanding these dynamics is key to unraveling the inner working mechanism of large language models (Biderman et al., 2023a; Tirumala et al., 2022). We believe the release of these checkpoints will pave the way for further advances in this rapidly developing field. | 2309.10305#8 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 8 | # 2 EXPERIMENTS: TREC-ROBUST
To illustrate these ideas, we have experimented with queries, documents, and labels from TREC-Robust 2004 [Voorhees 2004]. Our main question was whether LLMs could replicate the original TREC labels, assigned by expert human assessors.
# 2.1 Machinery and data
TREC-Robust includes 250 topics (each with one canonical query, so "query" and "topic" are synonymous in what follows)1. We took queries from the TREC title field; description and narrative were also included in some prompts, as discussed below.
Official labels were taken from the TREC-Robust qrel file. These labels were assigned by trained assessors, who had also provided the queries and topic descriptions, so although these are not "real" in situ search scenarios with a real product, they fit our definition of gold [Bailey et al. 2008]: the person who labelled each document is the single best judge of what the query and topic mean, and what sort of document was responsive. If and when a third-party labeller (human or LLM) deviates from gold, it is considered an error with respect to the first-party data. | 2309.10621#8 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
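The chunk above (2309.10621 #8) takes its official labels from the TREC-Robust qrels file. A small sketch of loading such a file, assuming the standard four-column qrels format (topic, iteration, document id, relevance) and an illustrative file name:

```python
# Sketch of loading TREC qrels, assuming the standard four-column format
# "topic iteration docno relevance"; the file name is a placeholder.
from collections import defaultdict


def load_qrels(path="qrels.robust04.txt"):
    qrels = defaultdict(dict)  # topic -> {docno: relevance}
    with open(path) as handle:
        for line in handle:
            topic, _iteration, docno, relevance = line.split()
            qrels[topic][docno] = int(relevance)  # 2 = highly relevant, 1 = relevant, 0 = not
    return qrels
```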
2309.10691 | 8 | In this section, we discuss (1) how to evaluate multi-turn interaction (§2.1) with tool use and language feedback under different settings; (2) how we repurpose existing datasets for MINT evaluation (§2.2). We use Fig. 1 as a running example.
INTERACTION FRAMEWORK
MINT aims to simulate real-world applications of LLMs, emphasizing user-LLM and LLM-tool interaction. In a user-LLM collaborative problem-solving process, a human user provides initial instruction and aims to obtain a satisfactory solution with little effort in helping the LLM. On the
3We use gpt-4-0613 version in this work. 4Evaluating an LLM using MINT costs around 100 USD (≈ 3M prompt tokens and ≈ 100K completion tokens) with feedback from gpt-4-0613 ($0.03/1K prompt tokens and $0.06/1K completion tokens), roughly 7% of the cost compared to hiring real-human annotators (§3.6).
| 2309.10691#8 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
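Footnote 4 in the chunk above (2309.10691 #8) prices one evaluation at roughly 100 USD; a quick back-of-envelope check of that figure under the quoted token counts and gpt-4-0613 prices:

```python
# Back-of-envelope check of footnote 4: the quoted token counts at gpt-4-0613 prices
# come to roughly 100 USD per evaluated model.
prompt_tokens, completion_tokens = 3_000_000, 100_000
prompt_price, completion_price = 0.03, 0.06  # USD per 1K tokens
cost = prompt_tokens / 1000 * prompt_price + completion_tokens / 1000 * completion_price
print(f"approx. ${cost:.0f} per model")  # approx. $96, i.e. around 100 USD
```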
2309.10818 | 8 | # 8 Conclusion
# A Data Proportion Details
# 1 Introduction
The success of modern large-scale models is deeply rooted in their training data. For large language models, the emphasis is not merely on generic text but on "diverse text". To guarantee the model's linguistic expertise and its comprehensive understanding of the world, this text must span a broad spectrum of domains, genres, languages, and more. Consequently, the composition
23 | 2309.10818#8 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 9 | In this technical report, we will also share some of the trials, errors, and lessons learned through training Baichuan 2. In the following sections, we will present detailed modifications made to the vanilla Transformer architecture and our training methodology. We will then describe our fine-tuning methods to align the foundation model with human preferences. Finally, we will benchmark the performance of our models against other LLMs on a set of standard tests. Throughout the report, we aim to provide transparency into our process, including unsuccessful experiments, to advance collective knowledge in developing LLMs. Baichuan 2's foundation models and chat models are available for both research and commercial use at https://github.com/baichuan-inc/Baichuan2
# 2 Pre-training
This section introduces the training procedure for the Baichuan 2 foundation models. Before diving into the model details, we first show the overall performance of the Baichuan 2 base models compared to other open or closed-sourced models in Table 1. We then describe our pre-training data and data processing methods. Next, we elaborate on the Baichuan 2 architecture and scaling results. Finally, we describe the distributed training system.
# 2.1 Pre-training Data | 2309.10305#9 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 9 | The original qrels files had 1031 "highly relevant" labels, 16 381 "relevant", and 293 998 "not relevant". In the first experiments below we used a stratified random sample of 1000 qrels for each label, 3000 labelled topic : document pairs in total. In later experiments we used all documents returned in Robust 2004 runs at ranks 1–100, where those documents were judged in TREC.
The experiments here used an in-house version of GPT-4 [OpenAI 2023], running on the Azure service. Temperature was set at zero, so the model would select the single most likely output; other parameters were top p = 1, frequency penalty 0.5, presence penalty 0, with no stopwords.
# 2.2 Prompting
Having carefully selected our gold data, we consider a number of prompt template variants (which determine the LLM inputs); varying the prompt is generally a cheap and fast way to improve quality [Karpathy 2023].
Figure 1 gives an overall schema for the prompts. Italicised words are placeholders, which were filled differently for each topic and document, or otherwise varied to match the rest of the prompt. Shaded text is optional and was included in some prompt variants. | 2309.10621#9 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
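To make the decoding configuration described in the chunk above concrete, the sketch below shows how a single labelling call with those settings (temperature 0, top p = 1, frequency penalty 0.5, presence penalty 0) might look using the openai Python client against an Azure deployment. The endpoint, key, API version, and deployment name are placeholders; this is an illustration, not the authors' in-house setup.

```python
# Illustrative sketch only: a relevance-labelling call with the decoding
# settings described above. Endpoint, key, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com/",  # placeholder
    api_key="YOUR_KEY",                                           # placeholder
    api_version="2024-02-01",
)

def label_relevance(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",            # placeholder Azure deployment name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,            # select the single most likely output
        top_p=1,
        frequency_penalty=0.5,
        presence_penalty=0,
    )
    return response.choices[0].message.content
```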
2309.10691 | 9 | [Figure 1 residue: transcript of an example MINT interaction. The user poses the task "Together Lily, David, and Bodhi collected 43 insects. Lily found 7 more than David. David found half of what Bodhi found. How many insects did Lily find?" with solution output format: an integer. Over several turns the LLM alternates Thought and Execute steps, writing sympy code that a Python interpreter runs, observes the results, and receives natural language feedback from an LLM-simulated user.] | 2309.10691#9 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 9 | of the pretraining data domains, such as Github, Wikipedia, books, and web text like CommonCrawl, plays a critical role in the performance of large language models. In our research, we delve into the domain/source weightings of training data. Leveraging SlimPajama-DC, we investigate two primary areas: (1) global-level and local-level deduplication, and (2) the efficacy of various combinations of thoroughly deduplicated datasets. The first emphasis basically encourages the model to be trained on all sources as no cross-domain overlaps inside, and the second helps us understand how to manage the integration and proportions of diverse domains, especially as datasets for LLM training continue to expand in variety. Generic Deduplication. Multi-source datasets often combine data from various origins, each with its unique distribution of information. When training large language models, handling data redundancy is critical to ensure that the model generalizes well and does not exhibit undue biases, making training faster and more efficient. Highly deduplicated datasets ensure that the model isn't repeatedly exposed to the same or very similar data | 2309.10818#9 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
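The chunk above describes deduplication of multi-source training data in general terms. One common way to detect near-duplicate documents at scale is MinHash with locality-sensitive hashing; the sketch below, using the datasketch library, is only an illustration of that idea, not the SlimPajama pipeline itself, and the shingle size and similarity threshold are arbitrary choices.

```python
# Minimal near-duplicate detection sketch with MinHash + LSH (illustrative).
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    tokens = text.lower().split()
    # 3-word shingles as the unit of comparison (an arbitrary choice)
    for i in range(max(1, len(tokens) - 2)):
        m.update(" ".join(tokens[i:i + 3]).encode("utf-8"))
    return m

def deduplicate(documents: dict[str, str], threshold: float = 0.8) -> list[str]:
    """Return ids of documents kept after dropping near-duplicates."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for doc_id, text in documents.items():
        m = minhash(text)
        if lsh.query(m):          # a similar document was already kept
            continue
        lsh.insert(doc_id, m)
        kept.append(doc_id)
    return kept
```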
2309.10621 | 10 | The prompt has four parts. The first part gives task instructions. These are closely based on instructions given to TREC assessors with two changes: First, the TREC instructions included material on consistency in labels, which is not relevant to an LLM case so was dropped here. Second, the phrase "you are a search engine quality rater..." replaces some of the TREC text which discusses the assessors' past experience developing TREC tracks. The phrase "search engine quality rater" is used by Google in its labelling efforts, and the phrase is widely used on the web, making it a useful shorthand.
Footnote 1: One query had no relevant documents. It is included in our analysis but will always score zero, on any metric, using the official labels.
The second part of the prompt gives the query/document pair to be labelled: we include the query that the "searcher" issued; in some configurations we include a more detailed version of their intent from the TREC narrative field; and we give the text of the document itself. | 2309.10621#10 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
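As a rough illustration of the second part of the prompt described above, the helper below assembles the query block, with the optional description and narrative included only in some variants. The template wording follows the prompt schema shown in the paper's Figure 1; the helper itself is our sketch, not the authors' code.

```python
# Illustrative prompt assembly for the query block; not the exact authors' code.
from typing import Optional

def build_query_block(query: str, description: Optional[str] = None,
                      narrative: Optional[str] = None) -> str:
    lines = [f"A person has typed [{query}] into a search engine."]
    if description or narrative:
        lines.append("They were looking for:")
        if description:
            lines.append(description)
        if narrative:
            lines.append(narrative)
    return "\n".join(lines)
```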
2309.10691 | 10 | [Figure 1 residue, continued: early interaction turns. The LLM's first attempt sets up the sympy equation Eq(x + (x + 7) + (1/2) * b, 43) and reaches a wrong value (25); the LLM-simulated user replies that this is not good and that the number of insects David found (half of Bodhi's) should be computed first; the LLM then apologizes and recomputes David's count as (1/2) * 18 and Lily's as that plus 7.] | 2309.10691#10 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 10 | train- ing faster and more efficient. Highly deduplicated datasets ensure that the model isnât repeatedly exposed to the same or very similar data points, mak- ing the training more efficient. Redundant data can slow down convergence and might make the model overfit to frequently seen patterns. Deduplication helps in efficient utilization of the modelâs capacity. In general, deduplication is the process of removing duplicate data to address this redundancy. Global Deduplication vs. Local Deduplication. The global deduplication pro- cess removes duplicates from the entire combined datasets. When weâre using data from multiple sources, there might be overlaps across sources. Global deduplication identifies and removes these overlapping instances irrespective of their source. In local deduplication, duplicates are removed within each in- dividual source dataset before merging them. However, if two source datasets have overlapping data, those duplicates will still be present in the final com- bined dataset since deduplication was only done locally within each dataset. In most current open-source LLM training data [7, 36, 38], only local dedupli- cation | 2309.10818#10 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
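To make the global-versus-local distinction described above concrete, here is a toy sketch using exact content hashes. It is purely illustrative (real pipelines typically use fuzzy, MinHash-style matching), but it shows why duplicates shared across sources survive local deduplication and are removed by global deduplication.

```python
# Toy illustration (not the SlimPajama code) of local vs. global deduplication.
import hashlib

def _h(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def local_dedup(sources: dict[str, list[str]]) -> list[str]:
    """Remove duplicates within each source only; cross-source copies survive."""
    combined = []
    for docs in sources.values():
        seen = set()
        for doc in docs:
            h = _h(doc)
            if h not in seen:
                seen.add(h)
                combined.append(doc)
    return combined

def global_dedup(sources: dict[str, list[str]]) -> list[str]:
    """Remove duplicates across the entire combined dataset."""
    seen, combined = set(), []
    for docs in sources.values():
        for doc in docs:
            h = _h(doc)
            if h not in seen:
                seen.add(h)
                combined.append(doc)
    return combined
```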
2309.10305 | 11 | GPT-4 GPT-3.5 Turbo 83.93 68.54 70.33 54.06 66.15 47.07 63.27 46.13 75.12 61.59 89.99 57.77 69.51 52.44 LLaMA-7B LLaMA 2-7B MPT-7B Falcon-7B ChatGLM 2-6B (base)â Baichuan 1-7B Baichuan 2-7B-Base 27.10 28.90 27.15 24.23 51.70 42.80 54.00 35.10 45.73 27.93 26.03 47.86 42.30 54.16 26.75 31.38 26.00 25.66 - 44.02 57.07 27.81 25.97 26.54 24.24 - 36.34 47.47 28.17 26.53 24.83 24.10 - 34.44 42.73 32.38 39.16 35.20 28.77 33.68 32.48 41.56 9.78 16.22 8.64 5.46 32.37 9.17 24.49 11.59 12.80 14.02 - - 9.20 18.29 LLaMA-13B 28.50 LLaMA 2-13B 35.80 Vicuna-13B 32.80 | 2309.10305#11 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 11 | The third part of the prompt restates the task, including the instruction to "split this problem into steps" by explicitly considering the searcher's intent as well as the document. This follows observations by Wei et al. [2022] and Kojima et al. [2022] that "chain of thought" or "step by step" prompts can produce more reliable results (something we have also observed, informally, in other work). In some variants, we expanded this to explicitly ask for scores for two aspects, topicality and trust, as well as an overall score. In some variants, we also ask the model to simulate several human judges (here five) and give scores from each.
The final part of the prompt specifies an output format and includes a snippet of JSON to encourage correct syntax. This is a "zero-shot" prompt, in that it does not include any examples of the task. Liang et al. [2022] report remarkably mixed results across tasks and models, so it is certainly possible that we could improve with one or more examples; it is also possible we could see some regression. The length of TREC documents means it is hard to include even one entire example, let alone more, and we leave experimentation with one- or few-shot prompts as future work. | 2309.10621#11 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
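The chunk above notes that the final part of the prompt specifies an output format, with a snippet of JSON included to encourage correct syntax. A defensive way to read such a reply back is sketched below; the field names M, T, and O are assumptions based on the aspect letters used in the prompt, not a documented schema.

```python
# Illustrative only: defensively parse a JSON-formatted label such as
# {"M": 2, "T": 1, "O": 2} out of the model's reply (field names assumed).
import json
import re
from typing import Optional

def parse_label(reply: str) -> Optional[dict]:
    match = re.search(r"\{.*\}", reply, flags=re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```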
2309.10691 | 11 | [Figure 1 residue, continued: later interaction turns. The LLM sets up Eq((1/2) * b + 7 + (1/2) * b + b, 43) for Bodhi's count b with sympy, obtains b = 18, computes David's count as (1/2) * 18 = 9 and Lily's as 9 + 7 = 16, and, after the LLM-simulated user confirms the calculation is good, proposes the final solution: Lily found 16 insects.] | 2309.10691#11 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
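For reference, the correct calculation that the interaction above eventually converges on can be reproduced directly with sympy; the short script below is our own restatement of the equation shown in the figure, not code from the benchmark.

```python
# Self-contained restatement of the figure's final sympy solution.
from sympy import Eq, Rational, symbols, solve

b = symbols("b")                     # insects Bodhi found
equation = Eq(Rational(1, 2) * b + 7 + Rational(1, 2) * b + b, 43)
bodhi = solve(equation, b)[0]        # 18
david = bodhi / 2                    # 9
lily = david + 7                     # 16
print(lily)                          # Lily found 16 insects
```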
2309.10818 | 11 | deduplication was only done locally within each dataset. In most current open-source LLM training data [7, 36, 38], only local dedupli- cation is performed within each data source, which neglects the redundancy across the different sources. Given the effects, global deduplication performed in SlimPajama is generally preferable for training large language models, es- pecially when using multi-source datasets. It ensures a balanced representa- tion of information and prevents the pitfalls associated with data redundancy. However, more hardware memory is naturally required by this strategy. Different Combinations of Highly-deduplicated Datasets. A model trained on diverse data is more likely to generalize well across various tasks. Itâs ex- posed to a wider range of vocabulary, syntax, and semantics, enabling it to handle a broad scope of queries. If diverse sources are chosen such that they represent different cultures, beliefs, and demographics, the model might be more balanced and less prone to biases. However, if many sources share com- mon biases, the final dataset might amplify them. Different sources can pro- vide both a breadth and depth of knowledge on various topics. Combining | 2309.10818#11 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 12 | 14.02 - - 9.20 18.29 LLaMA-13B 28.50 LLaMA 2-13B 35.80 Vicuna-13B 32.80 Chinese-Alpaca-Plus-13B 38.80 XVERSE-13B 53.70 Baichuan 1-13B-Base 52.40 58.10 Baichuan 2-13B-Base 46.30 55.09 52.00 43.90 55.21 51.60 59.17 31.15 37.99 36.28 33.43 58.44 55.30 61.97 28.23 30.83 30.11 34.78 44.69 49.69 54.33 28.22 32.29 31.55 35.46 42.54 43.20 48.17 37.89 46.98 43.04 28.94 38.06 43.01 48.78 20.55 28.89 28.13 11.98 18.20 26.76 52.77 15.24 15.24 16.46 16.46 15.85 11.59 17.07 | 2309.10305#12 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 12 | Note that we do not claim that this is the best prompt, or the best prompt format; indeed, in Section 4.4 we will see that even minor paraphrases can make a material difference. Our interest here is in the range of results we see with a reasonable prompt (as opposed to the minimal prompts of Faggioli et al. [2023] or Liang et al. [2022]), in the practical impact of disagreements, and in which features of a prompt seem to help or hinder LLM accuracy.
# 2.3 Variations
We varied the prompt in four ways:
Describing the role. The simplest version of our instructions asks for a score for a query and a web page. Web page quality is a complex notion, but search providers frequently publish hints of what they are looking for. In particular, Google's labelling guidelines use the phrase "search quality rater" [Google LLC 2022]. Some prompts therefore include the phrase "you are a search quality rater evaluating the relevance of web pages", as a shorthand way to reference both the guidelines (which are generally useful) and surrounding discussion. | 2309.10621#12 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10305 | 13 | # 7B
13B
Table 1: Overall results of Baichuan 2 compared with other similarly sized LLMs on general benchmarks. * denotes results derived from official websites.
[Figure 1 residue: chart of the distribution of training-data categories, with category labels including mass media, history, and religion; the remaining chart text is not recoverable.]
# 2.2 Architecture
The model architecture of Baichuan 2 is based on the prevailing Transformer (Vaswani et al., 2017). Nevertheless, we made several modifications which we detailed below.
# 2.3 Tokenizer
A tokenizer needs to balance two critical factors: a high compression rate for efficient inference, and an appropriately sized vocabulary to ensure adequate training of each word embedding. We have taken both these aspects into account. We have expanded the vocabulary size from 64,000 in Baichuan 1 to 125,696, aiming to strike a balance between computational efficiency and model performance.
Figure 1: The distribution of different categories of Baichuan 2 training data. | 2309.10305#13 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
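The tokenizer discussion above centres on compression: for the same text, a tokenizer that emits fewer tokens compresses better. The sketch below compares tokens per character for two Hugging Face tokenizers; the model identifiers are assumptions used for illustration (the exact compression-rate definition used in the paper's table is not given in this excerpt), and downloading them may require accepting the corresponding licences.

```python
# Illustrative comparison of tokenizer compression: fewer tokens per character
# for the same text suggests better compression. Model ids are assumptions.
from transformers import AutoTokenizer

def tokens_per_char(model_id: str, text: str) -> float:
    tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    return len(tok.encode(text)) / len(text)

sample = "大语言模型在中英文任务上都表现出色。Large language models perform well on many tasks."
for model_id in ("bigscience/bloom", "baichuan-inc/Baichuan2-7B-Base"):
    print(model_id, round(tokens_per_char(model_id, sample), 3))
```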
2309.10621 | 13 | Varying topical description. Queries alone are an impoverished representation of an information need, but TREC topics have additional text describing what the query means (description) and which documents should be considered responsive (narrative). For example, for the query hubble telescope achievements, the description restates that the query is about achievements of the space telescope since its launch in 1991, and the narrative clarifies that this is about scientific achievement so results that only talk about shortcomings and repairs would not be considered relevant. In some prompts, we include this text as the "description" and "narrative" fields.
Varying aspects. A straightforward approach, following the TREC guidelines, would be to ask for an overall label for each query : document pair. In past work with human labelling, we have found it more useful to spell out several aspects, and ask for ratings against these, before asking for an overall label. These extra questions have been useful to help anchor judge assessments, without constraining the final label (i.e. the overall label need not be a simple average of the per-aspect labels). Similarly, with large language models there has been demonstrated success with splitting problems into steps with prompts such as "think step by step" [Kojima et al. 2022].
# role | 2309.10621#13 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10818 | 13 |
work, we aim to shed light on this fascinating perspective of comprehensive data combination on SlimPajama. Specialization vs. Generalization Trade-off. In general, combining many specialized datasets can lead to a jack-of-all-trades model, which might not be as adept at specific tasks as a model trained on a specialized dataset. While the model can tackle a wide range of tasks, it might not have the depth of understanding that a specialized model might have for a particular domain. In this study, we also explore specialization and generalization ability using both individual and combined data sources.
The remainder of this paper is organized as follows. In Section 2, we elaborate the details of dataset statistics, token distributions, and data processing procedure. Section 3 describes dataset combination configurations for this SlimPajama-DC study. Our model architecture and training details are provided in Section 4, followed by the results and analysis in Section 5 on the range of various tasks in the zero- and few-shot settings. Section 6 presents an application of efficient Large Batch-size (LBS) training on a 7B model. Section 7 reviews related work and Section 8 concludes this study.
# 2 Dataset Overview
# 2.1 Number of Tokens | 2309.10818#13 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 14 | Figure 1: The distribution of different categories of Baichuan 2 training data.
Data processing: For data processing, we focus on data frequency and quality. Data frequency relies on clustering and deduplication. We built a large-scale deduplication and clustering system supporting both LSH-like features and dense embedding features. This system can cluster and deduplicate trillion-scale data within hours. Based on the clustering, individual documents, paragraphs, and sentences are deduplicated and scored. Those scores are then used for data sampling in pre-training. The size of the training data at different stages of data processing is shown in Figure 2.
Tokenizer comparison (vocab size; compression rate, lower is better): LLaMA 2: 32,000; 1.037. Bloom: 250,680; 0.501. ChatGLM 2: 64,794; 0.527. Baichuan 1: 64,000; 0.570. Baichuan 2: 125,696; 0.498.
Table 2: The vocab size and text compression rate of Baichuan 2's tokenizer compared with other models. The lower the better.
We use byte-pair encoding (BPE) (Shibata et al., 1999) from SentencePiece (Kudo and Richardson, 2018) to tokenize the data. Specifically, we do not apply any normalization to the input text and we | 2309.10305#14 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
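The chunk above states that the tokenizer uses byte-pair encoding from SentencePiece without normalizing the input text. A minimal training sketch in that spirit is shown below; the corpus path is a placeholder, and with a small toy corpus the vocabulary size would need to be far smaller than Baichuan 2's reported 125,696.

```python
# Minimal sketch (not the Baichuan 2 training code): BPE training with
# SentencePiece, with input normalization disabled.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",                  # placeholder path to raw training text
    model_prefix="bpe_tokenizer",
    model_type="bpe",
    vocab_size=125696,                   # Baichuan 2's reported size; use a much
                                         # smaller value for a toy corpus
    normalization_rule_name="identity",  # do not normalize the input text
)

sp = spm.SentencePieceProcessor(model_file="bpe_tokenizer.model")
print(sp.encode("byte-pair encoding example", out_type=str))
```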
2309.10621 | 14 | Large language models can accurately predict searcher preferences
# role
You are a search quality rater evaluating the relevance of web pages. Given a query and a web page, you must provide a score on an integer scale of 0 to 2 with the following meanings:
2 = highly relevant, very helpful for this query
1 = relevant, may be partly helpful but might contain other irrelevant content
0 = not relevant, should never be shown for this query
Assume that you are writing a report on the subject of the topic. If you would use any of the information contained in the web page in such a report, mark it 1. If the web page is primarily about the topic, or contains vital information about the topic, mark it 2. Otherwise, mark it 0.
description, narrative
Query A person has typed [query] into a search engine. They were looking for: description narrative
Result Consider the following web page.
--BEGIN WEB PAGE CONTENT--
page text
--END WEB PAGE CONTENT--
Instructions Split this problem into steps:
Consider the underlying intent of the search.
aspects Measure how well the content matches a likely intent of the query (M).
aspects Measure how trustworthy the web page is (T).
Consider the aspects above and the relative importance of each, and decide on a final score (O).
We asked five search engine raters to evaluate the relevance of the web page for the query. Each rater used their own independent judgement. | 2309.10621#14 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 14 | other hand, augmenting LLMs with tools can effectively improve LLMs' task-solving capabilities (Mialon et al., 2023), suggesting the importance of LLM-Tool interaction. We instruct the LLM (§F.4.1) to perform the following steps in each turn: (1) optionally express its reasoning process ("Thought:" in Fig. 1, similar to Yao et al. (2022)); (2) then either interact with tools by generating Python code and executing it through a Python interpreter ("Execute:" in Fig. 1), or proposing a solution to the user ("Propose Solution:" in Fig. 1). In our implementation, the model is instructed to wrap their "Execute" and "Propose Solution" actions with pairs of <execute> and <solution> tags for ease of parsing. We standardize the prompts and in-context examples for different LLM variants (base vs. chat) and for task-solving and feedback providing, aiming for fair and reproducible comparisons (Appendix §F.4.1, §F.4.2, and §F.5). In what follows, we introduce three settings with increased interaction complexity to measure different aspects of multi-turn interaction. | 2309.10691#14 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
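The chunk above describes wrapping each model action in <execute> or <solution> tags for ease of parsing. A minimal sketch of such a parser follows; it is an illustration of the described convention, not the MINT implementation.

```python
import re

def parse_action(response: str):
    """Extract the action from a model turn wrapped in <execute> or <solution> tags."""
    execute = re.search(r"<execute>(.*?)</execute>", response, re.DOTALL)
    solution = re.search(r"<solution>(.*?)</solution>", response, re.DOTALL)
    if solution is not None:
        return "propose_solution", solution.group(1).strip()
    if execute is not None:
        return "execute", execute.group(1).strip()   # Python code to run in the interpreter
    return "invalid", response

turn = "Thought: solve x + 2 = 5\n<execute>x = 5 - 2\nprint(x)</execute>"
print(parse_action(turn))   # ('execute', 'x = 5 - 2\nprint(x)')
```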
2309.10818 | 14 | # 2 Dataset Overview
# 2.1 Number of Tokens
SlimPajama has a total of 627B tokens across different domains, as shown in Table 1. It includes validation and test sets with 500M tokens each, and these have been cleaned to ensure no overlap with the training data. For the SlimPajama-DC study, our entire training dataset for each configuration contains 330B tokens after tokenization which is carefully selected from the original SlimPajama dataset. We tested different sampling strategies for different domains of our training data: (1) each token is trained only once during training, such as Commoncrawl, and (2) we perform more than one epoch for training on particular sources, such as the Wikipedia and Github domains. The detailed domain source proportions of various combinations are shown in Table 3.
[Table 1 was flattened by extraction; only the SlimPajama column is recoverable: Commoncrawl 52.2%, C4 26.7%, GitHub 5.2%, Books 4.2%, ArXiv 4.6%, Wikipedia 3.8%, StackExchange 3.3% (remaining listed sources 0.0%), 637B tokens in total. The RedPajama, LLaMA-1, RefinedWeb, GPT3 and MassiveText columns were not recovered.]
Table 1: Data source proportions for various datasets.
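The sampling strategy described above (one epoch for most sources, more than one for Wikipedia and Github) can be made concrete with a small sketch. The token counts and epoch choices below are illustrative placeholders, not the paper's configuration.

```python
# Minimal sketch: turn per-source token counts and epoch choices into mixture proportions.
source_tokens = {"Commoncrawl": 330e9, "Wikipedia": 12e9, "Github": 16e9}   # placeholders
epochs = {"Commoncrawl": 1.0, "Wikipedia": 2.0, "Github": 2.0}              # >1 epoch for some sources

effective = {s: source_tokens[s] * epochs[s] for s in source_tokens}
total = sum(effective.values())
proportions = {s: round(100 * t / total, 2) for s, t in effective.items()}
print(proportions)   # share of the training mixture contributed by each source
```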
# 2.2 Dataset Token Frequency Statistics | 2309.10818#14 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 15 | [Figure 2 content (a deduplication and filtering flow chart with retention percentages) is not recoverable from extraction.]
Figure 2: The data processing procedure of Baichuan 2's pre-training data.
positional embedding     hidden size   FFN size   num heads   num layers   seq. length   max LR
RoPE (Baichuan 2-7B)     4,096         11,008     32          32           4,096         2e-4
ALiBi (Baichuan 2-13B)   5,120         13,696     40          40           4,096         1.5e-4
Table 3: Model details of Baichuan 2. | 2309.10305#15 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 15 | We asked five search engine raters to evaluate the relevance of the web page for the query. Each rater used their own independent judgement.
Produce a JSON array of scores without providing any reasoning. Example: [{"M": 2, "T": 1, "O": 1}, {"M": 1 . . .
# Results [{
Fig. 1. General form of the prompts used in our TREC Robust experiments. Italicised words are placeholders, filled with appropriate values. Shaded text is optional, included in some prompt variants.
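The prompt template in Fig. 1 is straightforward to drive programmatically. Below is a minimal sketch, not from the paper, that fills the placeholders and aggregates the JSON array of simulated judge scores; the `call_llm` callable and the mean aggregation over the "O" field are assumptions for illustration.

```python
import json
import statistics

TEMPLATE = """You are a search quality rater evaluating the relevance of web pages.
Given a query and a web page, you must provide a score on an integer scale of 0 to 2.

Query: A person has typed [{query}] into a search engine.
They were looking for: {narrative}

Consider the following web page.
-BEGIN WEB PAGE CONTENT-
{page_text}
-END WEB PAGE CONTENT-

Produce a JSON array of scores without providing any reasoning.
Example: [{{"M": 2, "T": 1, "O": 1}}, ...]

# Results
"""

def judge(query: str, narrative: str, page_text: str, call_llm) -> float:
    """Fill the template, call an LLM (callable supplied by the caller),
    and aggregate the overall 'O' scores of the simulated judges."""
    prompt = TEMPLATE.format(query=query, narrative=narrative, page_text=page_text)
    raw = call_llm(prompt)            # e.g. one chat-completion call
    scores = json.loads(raw)          # expected: list of {"M": .., "T": .., "O": ..}
    return statistics.mean(s["O"] for s in scores)

# Example with a stubbed LLM that always returns five judgements:
fake_llm = lambda _: '[{"M":2,"T":1,"O":2},{"M":1,"T":1,"O":1},{"M":2,"T":2,"O":2},{"M":1,"T":1,"O":1},{"M":2,"T":1,"O":2}]'
print(judge("jaguar speed", "facts about the animal, not the car", "Jaguars can run ...", fake_llm))  # 1.6
```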
Inspired by these ideas, in some variants we explicitly ask for labels over aspects of "relevance" as well as for an overall label. For TREC Robust, we ask for labels for topicality ("how well the content matches a likely intent" - note that this captures likely intents that aren't captured elsewhere) and for trustworthiness ("how trustworthy the page is"). There are no further definitions of either aspect. | 2309.10621#15 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 15 | Lazy User-LLM Interaction. We consider the scenario where a user provides an initial instruction and makes minimal efforts to guide the LLM towards the final solution. This will serve as a baseline for subsequent evaluations of LLM's ability in tool-augmented task-solving and leveraging natural language feedback. The LLM is given two attempts to propose solutions for each problem, with a limit on the number of interaction turns k (§3.1). Upon a proposed solution, MINT simulates users that check the solution's correctness with ground truths. When the first attempt is wrong, the user responds to the LLM with "Your answer is wrong." The interaction ends either after the LLM has made two attempts to propose a solution, or when the solution is verified as correct (5th turn of Fig. 1), or when the k-th turn of interaction is reached. We consider this as the case of Lazy User-LLM Interaction since the simulated user provides at most one additional binary feedback during the entire course of interaction. We follow standard evaluation practice and use established evaluation metrics for each task in §2.2. | 2309.10691#15 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 15 | Table 1: Data source proportions for various datasets.
# 2.2 Dataset Token Frequency Statistics
To examine the similarity between various datasets in SlimPajama, we calculate the KL divergence between two domain distributions of token counts from different datasets, as shown in Fig. 1a. Given that distinct datasets may emphasize dissimilar token types, we subsequently delve into the differences in the distribution of these datasets across token subsets exhibiting distinct characteristics: (1) Tokens exclusively comprising letters (Fig. 1b); (2) The union set of tokens with the top 1000 frequencies on each dataset (Fig. 1c); (3) Numbers and commonly used operators, like "30", "+" and "=" (Fig. 1d); (4) Whitespace Tokens, like "\n" and " " (Fig. 1e); (5) Non-alphanumeric tokens, like "#" and "====" (Fig. 1f). | 2309.10818#15 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
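The chunk above compares domains by the KL divergence between their token-count distributions. A minimal sketch of that computation, assuming raw counts over a shared, aligned token vocabulary (not the paper's code):

```python
import numpy as np

def kl_divergence(counts_p, counts_q, eps: float = 1e-12) -> float:
    """KL(P || Q) between two token-frequency distributions given raw counts
    aligned to the same vocabulary; eps avoids division by zero."""
    p = np.asarray(counts_p, dtype=float) + eps
    q = np.asarray(counts_q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy example: counts for the same three tokens in two domains.
print(kl_divergence([5, 3, 2], [4, 4, 2]))   # small positive value; 0.0 iff identical
```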
2309.10305 | 16 | Table 3: Model details of Baichuan 2.
do not add a dummy prefix as in Baichuan 1. We split numbers into individual digits to better encode numeric data. To handle code data containing extra whitespaces, we add whitespace-only tokens to the tokenizer. The character coverage is set to 0.9999, with rare characters falling back to UTF-8 bytes. We set the maximum token length to 32 to account for long Chinese phrases. The training data for the Baichuan 2 tokenizer comes from the Baichuan 2 pre-training corpus, with more sampled code examples and academic papers to improve coverage (Taylor et al., 2022). Table 2 shows a detailed comparison of Baichuan 2's tokenizer with others.
# 2.3.1 Positional Embeddings
To enable further research on bias-based and multiplication-based attention, we apply RoPE on Baichuan 2-7B and ALiBi on Baichuan 2-13B, consistent with Baichuan 1.
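The tokenizer settings described above map onto standard SentencePiece training options. The sketch below is an illustration, not the released training script; the corpus path and vocabulary size are placeholders.

```python
import sentencepiece as spm

# A sketch of SentencePiece options matching the settings described above.
spm.SentencePieceTrainer.train(
    input="pretrain_corpus.txt",          # placeholder corpus path
    model_prefix="baichuan2_tokenizer",
    model_type="bpe",
    vocab_size=64_000,                    # placeholder, not the paper's value
    character_coverage=0.9999,            # rare characters fall back to UTF-8 bytes
    byte_fallback=True,
    split_digits=True,                    # numbers split into individual digits
    add_dummy_prefix=False,               # no dummy prefix, as stated above
    allow_whitespace_only_pieces=True,    # whitespace-only tokens for code data
    max_sentencepiece_length=32,          # long Chinese phrases
)
```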
# 2.4 Activations and Normalizations | 2309.10305#16 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 16 | Varying number of "judges". People naturally vary in their labels, and aggregating several labels for each result can reduce noise and increase sensitivity due to the law of large numbers. In some prompts we ask the model to simulate several judges, generating the output of five simulated judges from one LLM call. Since the outputs are generated in sequence they are not really independent labellers, but we previously found it useful to generate and aggregate multiple labels in this way, so we include it as a prompt variant here.
# 3 EVALUATING THE LABELS, EVALUATING THE LABELLERS
How are we to choose between labels, or rather between labelling processes? The main criterion is validity, in particular that labels from any new source should agree with gold labels [Faggioli et al. 2023]. We can measure this in two ways: by looking at the labels themselves or by looking at preferences between documents. Additionally, labels are typically aggregated to derive query-level or system-level scores, and we may care whether machine labels would lead to similar conclusions at these aggregated levels.
Further criteria include cost, in both dollars and time; throughput; and how easily we can measure new types of result, such as results in different languages or different media types.
# 3.1 Document labels | 2309.10621#16 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 16 | LLM-Tool Interaction with Lazy User-LLM Interaction. Under the lazy User-LLM interaction setting, we measure the LLM's ability to solve tasks using tools by comparing their task-solving success rate across different interaction limits k. For each turn, the LLM can choose to interact with
Table 1: Dataset statistics of re-purposed data instances from existing datasets into MINT. We filter and down-sample existing datasets to construct a compact set of complex tasks that require multi-turn interaction to solve (§2.2).
Task Type         Task Name                          Original Size   Reduced Size in MINT
Code Generation   HumanEval (Chen et al., 2021)      164             45
Code Generation   MBPP (Austin et al., 2021)         500             91
Decision Making   ALFWorld (Shridhar et al., 2020)   134             134
Reasoning         GSM8K (Cobbe et al., 2021)         1319            48
Reasoning         HotpotQA (Yang et al., 2018)       7,405           43
Reasoning         MATH (Hendrycks et al., 2021)      5,000           100
Reasoning         MMLU (Hendrycks et al., 2020)      13,985          76
Reasoning         TheoremQA (Chen et al., 2023)      800             49
Total                                                29,307          586 | 2309.10691#16 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 16 | There exists a degree of similarity in the distribution of different token subsets among RefinedWeb, Book, C4, and CommonCrawl, as well as between Github and StackExchange. Notably, when it comes to the distribution of non-alphanumeric tokens, Arxiv differs significantly from most datasets. On the distribution of whitespace tokens, RefinedWeb shows notable distinctions in comparison to Github and StackExchange. Among numbers and commonly used operators, the distribution of all datasets is relatively consistent.
# 2.3 Dataset Processing Procedure
SlimPajama was created by filtering low-length documents and applying MinHashLSH deduplication to the 1.2T token RedPajama dataset to reduce it to 627B tokens. RefinedWeb [27] shows that training on deduplicated data improves training compute efficiency and decreases the chance of LLMs generating memorized text from the dataset. By removing duplicate and low-length examples, it ultimately improves the training compute efficiency and model performance. The overview of SlimPajama preprocessing pipeline is shown in Fig. 2 and the preprocessing code is under https://github.com/Cerebras/modelzoo. | 2309.10818#16 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
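The chunk above attributes the 1.2T-to-627B reduction to low-length filtering and MinHashLSH deduplication. A toy sketch of the MinHash idea follows; it is not the SlimPajama pipeline, which builds an LSH index to find near-duplicate candidates at scale.

```python
import hashlib

def minhash_signature(text: str, num_perm: int = 64, shingle: int = 5):
    """Toy MinHash signature over character shingles."""
    shingles = {text[i:i + shingle] for i in range(max(1, len(text) - shingle + 1))}
    sig = []
    for seed in range(num_perm):
        sig.append(min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16) for s in shingles))
    return sig

def estimated_jaccard(sig_a, sig_b) -> float:
    # Fraction of matching signature slots estimates the Jaccard similarity of shingle sets.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature("the quick brown fox jumps over the lazy dog")
b = minhash_signature("the quick brown fox jumped over the lazy dog")
print(estimated_jaccard(a, b))   # high value -> likely near-duplicate candidates
```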
2309.10305 | 17 | # 2.4 Activations and Normalizations
We use the SwiGLU (Shazeer, 2020) activation function, a switch-activated variant of GLU (Dauphin et al., 2017) which shows improved results. However, SwiGLU has a "bilinear" layer and contains three parameter matrices, differing from the vanilla Transformer's feed-forward layer that has two matrices, so we reduce the feed-forward hidden size from 4 times the hidden size to 8/3 times the hidden size, rounded to a multiple of 128.
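The 8/3 scaling with rounding can be checked against the sizes in Table 3. A minimal sketch, assuming the rounding is upward to a multiple of 128 (one convention that reproduces both reported FFN sizes):

```python
import math

def swiglu_ffn_size(hidden: int, multiple: int = 128) -> int:
    # Reduce the conventional 4x expansion to 8/3 x and round up to a multiple of 128.
    return math.ceil(hidden * 8 / 3 / multiple) * multiple

print(swiglu_ffn_size(4096))   # 11008, matching Baichuan 2-7B
print(swiglu_ffn_size(5120))   # 13696, matching Baichuan 2-13B
```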
Building on Baichuan 1, we adopt Rotary Positional Embedding (RoPE) (Su et al., 2021) for Baichuan 2-7B and ALiBi (Press et al., 2021) for Baichuan 2-13B. ALiBi is a more recent positional encoding technique that has shown improved extrapolation performance. However, most open-sourced models use RoPE for positional embeddings, and optimized attention implementations like Flash Attention (Dao et al., 2022; Dao, 2023) are currently better suited to RoPE since it is multiplication-based, bypassing the need for passing attention_mask to the attention operation. Nevertheless, in preliminary experiments, the choice of positional embedding did not significantly impact model performance. | 2309.10305#17 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 17 | # 3.1 Document labels
The simplest way to evaluate a machine labelling process is to ask: does it produce the same labels as would human labellers? Evidently, if the labels are the same for any document, then the machine process can be directly substituted without any quality concerns.
We can summarise differences between the machine and human labels with a confusion matrix. The labels are on an ordinal scale (not an interval scale), but if we assign scores 0 and 1 to the two levels then we can further compute the mean difference between the human and machine labels. In what follows we report accuracy with the mean absolute error (MAE), where 0 means the two sources always agree on labels and 1 means they are maximally different.
In an earlier study, Faggioli et al. [2023] report Cohen's κ between TREC assessors and GPT-3.5 and YouChat LLMs, and we report κ here as well. κ ranges from 1 (complete agreement) through 0 (agreement only by chance) to -1 (complete disagreement). For direct comparison with Faggioli et al. we report κ
over binarised labels, where partly- and highly-relevant are conflated.
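A minimal sketch (not the paper's code) of the two agreement measures described above, computed on binarised labels so that MAE ranges from 0 (always agree) to 1; the example labels are invented:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Invented labels on the 0-2 scale, binarised to relevant vs. not.
human = np.array([2, 1, 0, 2, 1, 0, 1]) >= 1
model = np.array([2, 1, 1, 2, 0, 0, 1]) >= 1

mae = np.abs(human.astype(int) - model.astype(int)).mean()
kappa = cohen_kappa_score(human, model)
print(f"MAE={mae:.2f}  kappa={kappa:.2f}")
```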
# 3.2 Document preference | 2309.10621#17 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 17 | tools (generate code to call equation-solver in Fig. 1) or propose a solution (5th turn in Fig. 1). To keep the LLM from getting stuck in an infinite loop of tool-calling without proposing a solution, MINT reminds the LLM: "You have X steps left and Y chances to propose solution left," and provides an additional instruction at the last turn: "You should take the last step to propose a solution." Intuitively, with more interaction with tools, the LLM can get more useful observations through the Python interpreter (e.g., calculation results, error messages). We vary k ∈ {1, 2, 3, 4, 5} and compare the models' success rate with each k. We consider LLM's performance gain w.r.t. k and the absolute performance at k = 5 as their tool-augmented task-solving ability (§3.2). | 2309.10691#17 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
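The turn-budget reminders and solution-attempt limit described in the chunk above can be sketched as a simple loop. Everything below (the `model`, `run_tools` and `check_answer` stubs) is an assumption for illustration, not the MINT code:

```python
def interact(model, run_tools, check_answer, task, k=5, max_proposals=2):
    """Run up to k turns, reminding the model of its remaining budget each turn."""
    history, proposals = [task], 0
    for step in range(k, 0, -1):
        reminder = f"You have {step} steps left and {max_proposals - proposals} chances to propose solution left."
        if step == 1:
            reminder += " You should take the last step to propose a solution."
        action, content = model(history + [reminder])   # ('execute'|'propose_solution', text)
        if action == "execute":
            history += [content, run_tools(content)]    # observation from the Python interpreter
        else:
            proposals += 1
            if check_answer(content):
                return True
            history += [content, "Your answer is wrong."]
            if proposals >= max_proposals:
                return False
    return False
```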
2309.10818 | 17 | Data source          Commoncrawl   C4      GitHub   Books   ArXiv   Wikipedia   StackExchange   Total
Document filter rate    0.02%         4.7%    0.0%     0.0%    0.62%   0.0%        0.32%           1.86%
Byte duplication rate   63.76%        6.85%   46.16%   2.01%   0.06%   2.24%       0.20%           49.60%
Table 2: Document low-length filter rates and data source byte duplication rates.
[Figure 1: heat maps of KL divergence between domain token-count distributions (Commoncrawl, C4, RefinedWeb, Book, StackExchange, Github, Wikipedia, Arxiv); individual cell values are not recoverable from extraction.] | 2309.10818#17 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 18 | For the attention layer of Baichuan 2, we adopt the memory efficient attention (Rabe and Staats, 2021) implemented by xFormers2. By leveraging xFormers' optimized attention with biasing capabilities, we can efficiently incorporate ALiBi's bias-based positional encoding while reducing memory overhead. This provides performance and efficiency benefits for Baichuan 2's large-scale training.
We apply Layer Normalization (Ba et al., 2016) to the input of the Transformer block which is more robust to the warm-up schedule (Xiong et al., 2020). In addition, we use the RMSNorm implementation
2: https://github.com/facebookresearch/xformers
introduced by (Zhang and Sennrich, 2019), which only calculates the variance of input features to improve efficiency.
# 2.5 Optimizations
We use the AdamW (Loshchilov and Hutter, 2017) optimizer for training. β1 and β2 are set to 0.9 and 0.95, respectively. We use a weight decay of 0.1 and clip the grad norm to 0.5. The models are warmed up with 2,000 linear scaling steps to reach the max learning rate, after which cosine decay is applied down to the minimum learning rate. The parameter details and learning rates are shown in Table 3. | 2309.10305#18 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
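The optimizer settings in the chunk above (AdamW with β1=0.9, β2=0.95, weight decay 0.1, gradient clipping at 0.5, 2,000 linear warm-up steps, cosine decay) can be sketched in PyTorch. The minimum learning rate and total step count below are assumptions, not values from the report:

```python
import math
import torch

model = torch.nn.Linear(16, 16)                                      # stand-in for the real model
max_lr, min_lr, warmup, total_steps = 2e-4, 2e-5, 2_000, 100_000     # min_lr/total_steps assumed

opt = torch.optim.AdamW(model.parameters(), lr=max_lr, betas=(0.9, 0.95), weight_decay=0.1)

def lr_scale(step: int) -> float:
    # Linear warm-up to max_lr, then cosine decay toward min_lr (returned as a multiplier of max_lr).
    if step < warmup:
        return step / warmup
    progress = min(1.0, (step - warmup) / max(1, total_steps - warmup))
    cosine = 0.5 * (1 + math.cos(math.pi * progress))
    return (min_lr + (max_lr - min_lr) * cosine) / max_lr

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_scale)

# Inside the training loop:
#   loss.backward()
#   torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
#   opt.step(); sched.step(); opt.zero_grad()
```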
2309.10621 | 18 | # 3.2 Document preference
Minimising document-level MAE gives us scores which are calibrated across queries, and interpretable for debugging and development. Ranking, however, can use preferences between documents rather than calibrated scores; this is also sufficient for many learning-to-rank algorithms [Liu 2009]. On this view it is the relative ordering of any two documents that is important, and we can measure this with pairwise accuracy or AUC: the chance that, given any two documents with a human preference, the model's preference is the same. A score of 1 means the model's preferences are always the same as the human's, a score of 0 means they always disagree, and a score of 0.5 is chance alone.
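A minimal sketch (not the paper's code) of the pairwise agreement described above, treating model ties as disagreement; the example scores are invented:

```python
from itertools import combinations

def pairwise_agreement(human, model):
    """Fraction of document pairs with a human preference where the model orders them the same way."""
    agree = total = 0
    for i, j in combinations(range(len(human)), 2):
        if human[i] == human[j]:
            continue                          # no human preference for this pair
        total += 1
        human_prefers_i = human[i] > human[j]
        model_prefers_i = model[i] > model[j]
        agree += (human_prefers_i == model_prefers_i) and model[i] != model[j]
    return agree / total if total else float("nan")

print(pairwise_agreement([2, 1, 0, 2], [1.9, 0.7, 0.1, 1.2]))   # 1.0: all orderings match
```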
(a) Preferences only within each topic (b) Preferences across topics
Fig. 2. Options for document preference. If we form preferences only within each topic, there is no constraint on how, for example, "better 1a" is scored relative to "worse 2a": labels can vary per topic. If we form preferences across topics, we add the constraint that "better 1a" should score higher than "worse 2a", so labels are consistent. We also generate many more pairs.
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops an large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 18 | Informative User-LLM Interaction with Language Feedback. Beyond lazy User-LLM interaction, we investigate how the LLM performs when the user mirrors a patient teacher who provides useful suggestions. However, collecting human language feedback for LLM evaluation presents reproducibility challenges due to inconsistent standards and can be costly, particularly for open-source communities with relatively fewer resources5. To address these issues, we prompt GPT-4 (§F.4.2) to simulate user language feedback (dotted boxes in Fig. 1). We validate the effectiveness of GPT-4 feedback in a human evaluation (§3.6). We compare the performance between (1) simulated language feedback and (2) lazy user-LLM interaction, both in the setting of tool-augmented interaction with an interaction limit k = 5. We consider performance (absolute) and improvements from language feedback as the LLM's ability to leverage natural language feedback.
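A minimal sketch of how such feedback simulation could be wired up is below; the prompt text and the `llm` callable are illustrative assumptions, not the actual MINT prompt (which is given in §F.4.2):

```python
FEEDBACK_INSTRUCTIONS = (
    "You are simulating a patient user. Given the task, the reference answer, and "
    "the assistant's latest attempt, give brief natural-language feedback that "
    "points out problems without revealing the answer."
)

def simulated_feedback(task: str, reference: str, attempt: str, llm) -> str:
    """`llm` is a hypothetical callable mapping a prompt string to a completion,
    standing in for a GPT-4 API call."""
    prompt = (f"{FEEDBACK_INSTRUCTIONS}\n\nTask: {task}\n"
              f"Reference answer: {reference}\n"
              f"Assistant's attempt: {attempt}\nFeedback:")
    return llm(prompt)
```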
2.2 REPURPOSING EXISTING DATASET FOR MINT | 2309.10691#18 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 18 | [Figure 1, panels (a) All Tokens and (b) Tokens Composed of Letters: confusion-matrix heatmaps of KL divergence between token-statistic distributions across the SlimPajama sources (CommonCrawl, C4, RefinedWeb, Book, StackExchange, Github, Wikipedia, Arxiv).] | 2309.10818#18 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 19 | The whole model is trained using BFloat16 mixed precision. Compared to Float16, BFloat16 has a better dynamic range, making it more robust to the large values that are critical in training large language models. However, BFloat16's low precision causes issues in some settings. For instance, in some public RoPE and ALiBi implementations, the torch.arange operation fails due to collisions when the integer exceeds 256, preventing differentiation of nearby positions. Therefore, we use full precision for some value-sensitive operations such as positional embeddings.
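As an illustration of the precision issue (a sketch, not Baichuan 2's code): bfloat16 keeps only 8 significant bits, so integer positions above 256 can collide, which is why position indices and the tables derived from them are kept in float32.

```python
import torch

# torch.arange in bfloat16 cannot represent every integer above 256, so nearby
# positions can collide; build position ids and rotary tables in float32 instead.
positions = torch.arange(4096, dtype=torch.float32)
inv_freq = 1.0 / (10000.0 ** (torch.arange(0, 128, 2, dtype=torch.float32) / 128))
angles = torch.outer(positions, inv_freq)          # full-precision rotary angle table
cos_table, sin_table = angles.cos(), angles.sin()  # cast to bf16 later if desired
```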
NormHead: To stabilize training and improve the model performance, we normalize the output embeddings (which are also referred to as the "head"). There are two advantages of NormHead in our experiments. First, in our preliminary experiments we found that the norm of the head is prone to be unstable: the norm of a rare token's embedding becomes smaller during training, which disturbs the training dynamics. NormHead stabilizes the dynamics significantly. Second, we found that semantic information is mainly encoded by the cosine similarity of embeddings rather than by L2 distance. Since the current linear classifier computes logits by dot product, which mixes L2 distance and cosine similarity, NormHead alleviates the distraction of L2 distance in computing logits. For more details, please refer to Appendix B. | 2309.10305#19 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 19 | (Another consideration is that two scoring schemes may differ in scale and location: for example, one source may give scores 1–5 while another gives 1–10 or 0–99. MAE in this case is misleading, even if there is a completely consistent mapping from one source to another. Pairwise preferences are robust to this sort of difference.)
There are two ways to form pairs of documents (Figure 2). If we choose pairs of documents only from the same topic, we can use a topic-dependent labelling scale: the worse document for one topic might still be better than the better document from another, for example if one topic is especially hard. The set of pairs will also be smaller. Choosing pairs of documents from all topics, that is, from all documents ever labelled, enforces a query-independent scale as the 'better' document from one query should score higher than the 'worse' document from any other. The set of pairs formed this way will also be bigger. In our evaluation, we choose the second approach; in other circumstances, the flexibility of per-topic ordering might be preferable.
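A small sketch of the two pair-formation options (our own illustration; `labels` is assumed to map a (topic, document) pair to a score):

```python
from itertools import combinations

def preference_pairs(labels: dict, across_topics: bool = True):
    """Return (better, worse) pairs. With across_topics=False only documents from
    the same topic are compared; with True every labelled document is comparable."""
    pairs = []
    for (key1, s1), (key2, s2) in combinations(labels.items(), 2):
        if not across_topics and key1[0] != key2[0]:
            continue                      # within-topic mode: skip cross-topic pairs
        if s1 == s2:
            continue                      # tied scores give no preference
        pairs.append((key1, key2) if s1 > s2 else (key2, key1))
    return pairs
```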
# 3.3 Query ordering | 2309.10621#19 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 19 | 2.2 REPURPOSING EXISTING DATASET FOR MINT
Evaluating LLMs in multi-turn interaction can be costly due to the need for iterative inference. For instance, HotpotQA (Yang et al., 2018) has 7,405 test examples. Evaluation with five turns requires at least 7,405 × 5 = 37K LLM inference runs. Previous methods (Yao et al., 2022; Shinn et al., 2023) choose to evaluate on randomly drawn test examples, hindering fair performance comparisons. We select diverse tasks from established datasets that require multi-turn interaction to solve while also maintaining the selected subset compact for accessible evaluation. The following paragraph describes our three-step approach to repurposing datasets for MINT. We provide dataset sources and statistics in Tab. 1. For more details, please refer to §D in Appendix.
Collecting and Re-purposing Data from Diverse Sources. Our primary goal is to create a comprehensive evaluation covering tasks that benefit from interaction. We choose three types of tasks:
including math reasoning (GSM8K, MATH, TheoremQA), multi-hop question answering (HotpotQA), and knowledge problem-solving (MMLU). We implicitly filter out knowledge-intensive questions that do not require multi-step reasoning in the next step. | 2309.10691#19 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 19 | [Figure 1, panels (c) Top 1000 Tokens and (d) Numbers and Commonly Used Operators: confusion-matrix heatmaps of KL divergence between token-statistic distributions across the SlimPajama sources.] | 2309.10818#19 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 20 | Max-z loss: During training, we found that the logits of LLMs could become very large. The softmax function is agnostic to absolute logit values, as it depends only on their relative values, but large logits caused issues during inference because common implementations of repetition
penalty (such as the Hugging Face implementation3 in model.generate) apply a scalar (e.g. 1.1 or 1.2) directly to the logits. Contracting very large logits in this way can significantly alter the probabilities after softmax, making the model sensitive to the choice of the repetition penalty hyper-parameter. Inspired by NormSoftmax (Jiang et al., 2023b) and the auxiliary z-loss from PaLM (Chowdhery et al., 2022), we added a max-z loss to normalize the logits:
L_max-z = 2e−4 × z² (1)
where z is the maximum logit value. This helped stabilize training and made the inference more robust to hyper-parameters.
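A minimal PyTorch sketch of such an auxiliary term (our reading of Eq. 1; averaging over positions is an assumption, not a detail stated in the text):

```python
import torch

def max_z_loss(logits: torch.Tensor, coeff: float = 2e-4) -> torch.Tensor:
    """Penalise the squared maximum logit so logits stay small and inference is
    less sensitive to multiplicative repetition-penalty scaling."""
    z = logits.max(dim=-1).values        # maximum logit per position
    return coeff * (z ** 2).mean()       # 2e-4 * z^2, averaged over positions
```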
Figure 3: The pre-training loss of Baichuan 2.
The final training losses of Baichuan 2-7B and Baichuan 2-13B are shown in Figure 3.
# 2.6 Scaling Laws | 2309.10305#20 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 20 | # 3.3 Query ordering
Our primary interest is in generating (and evaluating) labels for documents. However, past work has shown that errors in document labels can be washed out when labels are aggregated to query-level or system-level scores [Bailey et al. 2008]. It is certainly possible that differences in labels are not relevant to query- or system-level evaluations.
In consideration of this we can also order result lists (SERPs) by some metric (e.g. RBP or MAP), according to the labels produced by humans and with regard to some fixed search engine; order the same result lists, on the same metric, according to the labels produced by a model; and ask how similar the two orderings are.
With this query-level analysis we are likely to be looking for queries which do badly (i.e. where a system scores close to zero), so here we measure correlation with rank-biased overlap (RBO) [Webber et al. 2010] after sorting the queries from lowest to highest scores. This means that (dis)agreements about which queries score worst, which are the queries we want to investigate, count for more than (dis)agreements about those queries that score well.
In our case, since the two rankings are permutations, there is a well-defined lower bound2 for RBO:
(1 − φ) Σ_{d=⌊N/2⌋+1}^{N} φ^{d−1} (2d − N) / d | 2309.10621#20 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 20 | Code generation, including HumanEval and MBPP. • Decision-making tasks in ALFWorld, an embodied household simulator with a text-only interface based on TextWorld (Côté et al., 2018).
5 Based on our human evaluation (§3.6, §B), we estimate annotators, on average, take 96 seconds to provide language feedback for one turn, which translates to 90 USD per 100 pieces of feedback at the hourly wage of US workers.
From eight datasets, we create an initial test set of 29,307 instances. All instances are initially designed for single-round evaluation without interaction, except for decision-making (ALFWorld). Similarly to Yao et al. (2022); Gao et al. (2023), we adapt reasoning tasks into multi-turn interaction tasks by augmenting the LLM with tools for problem-solving (§F.5.3). Through in-context prompting (§F.5.2), we encourage LLMs to use the Python interpreter to test their generated code on the provided public test suite for code generation problems before committing to a solution. | 2309.10691#20 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 20 | [Figure 1, panels (e) Whitespace Tokens and (f) Non-Alphanumeric Tokens: confusion-matrix heatmaps of KL divergence between token-statistic distributions across the SlimPajama sources.]
Figure 1: Confusion matrix using KL divergence between the distributions of token statistics for different datasets.
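For reference, a sketch of how one entry of such a confusion matrix could be computed from raw token counts (our own illustration; the smoothing constant is an assumption):

```python
import numpy as np
from collections import Counter

def kl_divergence(p_counts: Counter, q_counts: Counter, eps: float = 1e-10) -> float:
    """KL(P || Q) between two token-frequency distributions."""
    vocab = sorted(set(p_counts) | set(q_counts))
    p = np.array([p_counts[t] for t in vocab], dtype=float) + eps
    q = np.array([q_counts[t] for t in vocab], dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# matrix[i][j] = kl_divergence(counts[source_i], counts[source_j]) for each pair of sources.
```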
Figure 2: SlimPajama preprocessing pipeline.
# 2.3.1 Low-length Document Filtering | 2309.10818#20 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 21 | The final training losses of Baichuan 2-7B and Baichuan 2-13B are shown in Figure 3.
# 2.6 Scaling Laws
Neural scaling laws, where the error decreases as a power function of training set size, model size, or both, have provided an assuring way to predict performance as training has become more and more expensive in deep learning and large language models. Before training large language models of billions of parameters, we first train some small-sized models and fit a scaling law for training larger models.
We launched a range of model sizes going from 10M to 3B, ranging from 1/1000 to 1/10 the size of the final model, and each model is trained for up to 1 trillion tokens, using consistent hyper-parameters and the same data set sourced from Baichuan 2. Based on the final loss of the different models, we can obtain a mapping from the training flops to the target loss.
3 https://huggingface.co/transformers/v4.1.1/_modules/transformers/generation_logits_process.html
[Figure 4 residue: final loss versus log FLOPs for the models from 10M to 3B parameters.] | 2309.10305#21 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 21 | (1 − φ) Σ_{d=⌊N/2⌋+1}^{N} φ^{d−1} (2d − N) / d
with N queries and a discount parameter φ. For ease of interpretation we use this minimum to normalise RBO scores into the range 0 to 1, so 0 is an exactly opposite ranking and 1 is an identical ranking. We set φ = 0.9, corresponding to an experimenter looking (on average) at the first ten queries.
2 Alistair Moffat, personal communication, July 2023.
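A sketch of the normalisation, using the reconstruction of the lower bound above and truncating RBO at depth N (the paper may use a different RBO variant):

```python
def normalised_rbo(ranking_a, ranking_b, phi=0.9):
    """Map RBO between two permutations of the same N items onto [0, 1],
    where 0 is the worst-case (most dissimilar) ordering and 1 is identical."""
    n = len(ranking_a)

    def rbo(x, y):
        seen_x, seen_y, total = set(), set(), 0.0
        for d in range(1, n + 1):
            seen_x.add(x[d - 1])
            seen_y.add(y[d - 1])
            total += phi ** (d - 1) * len(seen_x & seen_y) / d
        return (1 - phi) * total

    observed = rbo(ranking_a, ranking_b)
    lower = (1 - phi) * sum(phi ** (d - 1) * (2 * d - n) / d
                            for d in range(n // 2 + 1, n + 1))
    upper = rbo(ranking_a, ranking_a)          # score of an identical ranking
    return (observed - lower) / (upper - lower)
```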
# 3.4 System ordering
The primary use of query:document scores is of course to score a whole system, first by accumulating document scores to query scores then by accumulating query scores to system scores. To see the effect of disagreements between our human and LLM judges, we report RBO over those systems which ran the same queries. Again, since there is a fixed set of systems, we can calculate the minimum RBO score and normalise. An experimenter might look seriously at the top three or four systems, so we set φ = 0.7.
# 3.5 Ground-truth preferences between results
An alternative view is that, since human-assigned labels may themselves be biased or noisy, labels should instead accurately predict real searcher preferences. | 2309.10621#21 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 21 | Keeping Instances that Require Multi-turn Interaction. To better answer our research question "how LLM benefits from multi-turn interaction," we only keep instances that are challenging and require multi-turn interaction. Since we allow the LLM to propose solutions more than once, we filter out instances that a random-guess baseline can do well on, e.g., multiple-choice instances with < 4 options. We then run gpt-3.5-turbo-0613 (OpenAI API) on the initial dataset and exclude instances finished within two turns (e.g., easy problems that can be solved without multi-turn interaction).
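A sketch of this filtering step; `run_with_gpt35` is a hypothetical callable that returns whether the instance was solved and in how many turns:

```python
def keep_instance(instance: dict, run_with_gpt35) -> bool:
    """Drop instances that are trivially easy: guessable multiple-choice items,
    or anything gpt-3.5-turbo-0613 already finishes within two turns."""
    options = instance.get("options")
    if options is not None and len(options) < 4:
        return False                         # a random guess does too well
    solved, turns_used = run_with_gpt35(instance)
    return not (solved and turns_used <= 2)
```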
Stratified Sub-Sampling for Efficient Evaluation. We use stratified sampling to create a compact and representative set of 586 examples, ensuring that the ratio of correct to incorrect examples in the resulting set mirrors that of the original data to balance the difficulty of the resulting samples.
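A minimal sketch of such stratified down-sampling (the 'solved' flag comes from the initial gpt-3.5-turbo run; field names are illustrative):

```python
import random

def stratified_subsample(instances, n_total=586, seed=0):
    """Sample n_total instances while preserving the solved/unsolved ratio."""
    rng = random.Random(seed)
    solved = [x for x in instances if x["solved"]]
    unsolved = [x for x in instances if not x["solved"]]
    n_solved = round(n_total * len(solved) / len(instances))
    subset = rng.sample(solved, n_solved) + rng.sample(unsolved, n_total - n_solved)
    rng.shuffle(subset)
    return subset
```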
# 3 EXPERIMENTS
3.1 SETUP | 2309.10691#21 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 21 | Figure 2: SlimPajama preprocessing pipeline.
# 2.3.1 Low-length Document Filtering
Additional global filtering is performed to remove short, low-quality documents. After removing punctuation, consecutive spaces, newlines, tabs, and leading or trailing escape characters, documents with less than 200 characters were further filtered out. These documents typically contain only metadata and no useful information. A low-length filter was applied to every corpus other than Books and GitHub, where short documents were found to be useful. The percentage of documents filtered out from each corpus within the SlimPajama dataset is detailed in Table 2. In total, this additional step removed 1.86% of the documents.
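A sketch of the rule described above (our own reading; the exact normalisation and corpus names may differ from the released preprocessing code):

```python
import re

MIN_CHARS = 200

def normalised_length(text: str) -> int:
    """Character count after stripping punctuation, escape characters, and
    collapsing consecutive whitespace."""
    text = text.strip()
    text = re.sub(r"[^\w\s]", "", text)      # remove punctuation
    text = re.sub(r"[\n\t\r]+", " ", text)   # newlines/tabs -> single space
    text = re.sub(r" {2,}", " ", text)       # collapse runs of spaces
    return len(text)

def keep_document(text: str, corpus: str) -> bool:
    if corpus in {"Book", "Github"}:         # exempt: short documents are useful here
        return True
    return normalised_length(text) >= MIN_CHARS
```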
# 2.3.2 Global Deduplication | 2309.10818#21 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 22 | Figure 4: The scaling law of Baichuan 2. We trained various models ranging from 10 million to 3 billion parameters with 1 trillion tokens. By fitting a power law term to the losses given training flops, we predicted losses for training Baichuan 2-7B and Baichuan 2-13B on 2.6 trillion tokens. This fitting process precisely predicted the final models' losses (marked with two stars).
To fit the scaling law of the model, we employed the formula given by Henighan et al. (2020):
L_C = a × C^b + L_∞ (2)
where L_∞ is the irreducible loss and the first term is the reducible loss, which is formulated as a power-law scaling term. C is the training FLOPs and L_C is the final loss of the model at that amount of compute. We used the curve_fit function from the SciPy library to fit the parameters. The final fitted scaling curve and the predicted final losses of the 7 billion and 13 billion parameter models are shown in Figure 4. We can see that the fitted scaling law predicted Baichuan 2's final loss with high accuracy.
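A sketch of this fitting procedure with scipy.optimize.curve_fit; the (FLOPs, loss) pairs below are made-up numbers purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(C, a, b, L_inf):
    """L_C = a * C**b + L_inf: power-law reducible loss plus irreducible loss."""
    return a * np.power(C, b) + L_inf

# Illustrative (training FLOPs, final loss) pairs for the small models.
flops = np.array([1e18, 5e18, 1e19, 5e19, 1e20, 5e20])
loss = np.array([3.10, 2.85, 2.74, 2.52, 2.43, 2.26])

(a, b, L_inf), _ = curve_fit(scaling_law, flops, loss, p0=(30.0, -0.06, 1.5), maxfev=20000)
predicted_loss = scaling_law(1e24, a, b, L_inf)   # extrapolate to a larger budget
```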
# 2.7 Infrastructure
Efficiently leveraging existing GPU resources plays a critically important role in training and developing large language models today. To accomplish this, we develop a co-design approach for an elastic training framework and a smart cluster scheduling policy. | 2309.10305#22 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |
2309.10621 | 22 | # 3.5 Ground-truth preferences between results
An alternative view is that, since human-assigned labels may themselves be biased or noisy, labels should instead accurately predict real searcher preferences.
Evaluating machine labels by their agreement with human labels is useful, because in many situations we can use a large corpus of existing labels. However, it does not speak to the validity of the labels: that is, whether the labels (or a metric derived from the labels) reflects some true searcher experience. If machine labels agree with human labels to (e.g.) 80%, then the 20% disagreement might be a fault with the machine, or poor-quality labels from the humans, or some combination. We expand on this idea in Section 5.
# 3.6 Other criteria
Besides the above, we can imagine other criteria for choosing a labelling process. These might include cost per label; time, per label or end-to-end; reliability; scalability; difficulty of adapting to new languages, markets, or evaluations; and ease of debugging the labelling process. We do not address these criteria here, but in our experience labelling with LLMs is superior to labelling by crowd workers on all these grounds and is superior to labelling by experts (employees or specially-qualified workers) on all grounds except debuggability.
# 4 RESULTS | 2309.10621#22 | Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. To measure agreement with real
searchers needs high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | http://arxiv.org/pdf/2309.10621 | Paul Thomas, Seth Spielman, Nick Craswell, Bhaskar Mitra | cs.IR, cs.AI, cs.CL, cs.LG | null | null | cs.IR | 20230919 | 20230919 | [
{
"id": "2305.03495"
},
{
"id": "2211.01910"
},
{
"id": "2308.12885"
},
{
"id": "2304.06588"
},
{
"id": "2108.07258"
},
{
"id": "2309.03409"
},
{
"id": "2306.04751"
},
{
"id": "2303.15056"
},
{
"id": "2211.09110"
},
{
"id": "2307.02179"
},
{
"id": "2104.10350"
},
{
"id": "2211.11890"
},
{
"id": "2201.11903"
},
{
"id": "2304.09161"
},
{
"id": "2303.08774"
}
] |
2309.10691 | 22 | Evaluated LLMs. To comprehensively measure multi-turn interaction capability and identify the potential gap between open- and closed-source LLMs, we evaluate 4 closed- and 16 open-source LLMs. We cover different sizes and training techniques to better understand how they affect LLMs' multi-turn interaction capability. Training techniques lead to three model variants: pre-trained (base) models, supervised instruction fine-tuned (SIFT, Wei et al., 2022) models, and models trained with reinforcement learning from human feedback (RLHF, Ouyang et al., 2022a). For closed-source models, we evaluate popular commercial LLMs, including gpt-3.5-turbo-0613 from OpenAI API; claude-instant-1, claude-2 from Anthropic Claude API; Bard chat-bison-001 from Bard API. For open-source LLMs, we evaluate the LLaMA-2 model family (7B, 13B, 70B) (Touvron et al., 2023), including base and chat (RLHF); Vicuna-v1.5 (7B, 13B) (Zheng et al., 2023), a SIFT model fine-tuned on multi-turn conversations | 2309.10691#22 | MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT can help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | http://arxiv.org/pdf/2309.10691 | Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji | cs.CL, cs.AI, cs.LG | Code is available on our project website:
https://xingyaoww.github.io/mint-bench | null | cs.CL | 20230919 | 20231012 | [
{
"id": "2308.12950"
},
{
"id": "2110.14168"
},
{
"id": "2306.14898"
},
{
"id": "2306.06070"
},
{
"id": "2107.03374"
},
{
"id": "2307.16789"
},
{
"id": "2304.08354"
},
{
"id": "2108.07732"
},
{
"id": "2302.07842"
},
{
"id": "2303.11366"
},
{
"id": "1809.09600"
},
{
"id": "2305.10314"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2305.12524"
},
{
"id": "2009.03300"
},
{
"id": "2010.03768"
},
{
"id": "2308.10855"
}
] |
2309.10818 | 22 | # 2.3.2 Global Deduplication
When building SlimPajama, it is observed that every corpus included in it contained duplicates, with the most significant duplication found in CommonCrawl and GitHub. RefinedWeb [27] also found similar rates of deduplication in the CommonCrawl data. It is most common to perform deduplication within each dataset source separately [36, 7, 42, 13] to reduce implementation complexity and meet resource constraints. This local deduplication approach does not have the ability to remove overlap between data sources, which can be significant for web-scraped data. Instead, global deduplication removes duplication within and between each data source. Following [4, 27, 1, 31], global-level deduplication is performed using the MinHashLSH algorithm. To facilitate global deduplication efforts and reproducibility for other researchers, a tool designed for scalable performance is offered under the above link.
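As an illustration, a minimal sketch of global MinHashLSH deduplication is given below. It assumes the `datasketch` library, hypothetical document texts pooled across sources, and the settings spelled out in the next paragraph (Jaccard similarity threshold 0.8, preprocessed lowercase 13-gram signatures); the number of permutations (128) is an assumption, not taken from the paper.

```python
import re
from datasketch import MinHash, MinHashLSH  # assumed third-party dependency

def normalise(text: str) -> str:
    # Lowercase and strip punctuation / repeated whitespace before shingling.
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def signature(text: str, n: int = 13, num_perm: int = 128) -> MinHash:
    tokens = normalise(text).split()
    grams = {" ".join(tokens[i:i + n]) for i in range(max(1, len(tokens) - n + 1))}
    m = MinHash(num_perm=num_perm)
    for gram in grams:
        m.update(gram.encode("utf8"))
    return m

# Hypothetical corpus pooled across all sources (id -> text).
documents = {
    "cc-0": "the quick brown fox jumps over the lazy dog " * 3,
    "gh-0": "the quick brown fox jumps over the lazy dog " * 3,  # near-duplicate
    "wiki-0": "an entirely different article about minhash based deduplication",
}

lsh = MinHashLSH(threshold=0.8, num_perm=128)
duplicates = []
for doc_id, text in documents.items():
    sig = signature(text)
    if lsh.query(sig):           # near-duplicate of an already-kept document
        duplicates.append(doc_id)
    else:
        lsh.insert(doc_id, sig)  # keep the first occurrence and index it

print(duplicates)  # ["gh-0"]
```

Because the LSH index is shared across all sources, duplicates are caught both within and between corpora, which is the distinction between global and local deduplication drawn above.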
Specifically, global MinHashLSH deduplication is performed using a Jaccard similarity threshold of 0.8, document signatures constructed with preprocessed lowercase 13-grams, and schema following [22]. To unify a representation of the same content, punctuation, consecutive spaces, newlines, tabs, and leading or trailing escape characters are removed. The level of deduplication
| 2309.10818#22 | SlimPajama-DC: Understanding Data Combinations for LLM Training | This paper aims to understand the impacts of various data combinations (e.g.,
web text, wikipedia, github, books) on the training of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive 1.2T tokens RedPajama dataset contributed by Together. We've
termed our research as SlimPajama-DC, an empirical analysis designed to uncover
fundamental characteristics and best practices associated with employing
SlimPajama in the training of large language models. During our research with
SlimPajama, two pivotal observations emerged: (1) Global deduplication vs.
local deduplication. We analyze and discuss how global (across different
sources of datasets) and local (within the single source of dataset)
deduplications affect the performance of trained models. (2) Proportions of
high-quality/highly-deduplicated multi-source datasets in the combination. To
study this, we construct six configurations of SlimPajama dataset and train
individual ones using 1.3B Cerebras-GPT model with Alibi and SwiGLU. Our best
configuration outperforms the 1.3B model trained on RedPajama using the same
number of training tokens by a significant margin. All our 1.3B models are
trained on Cerebras 16$\times$ CS-2 cluster with a total of 80 PFLOP/s in bf16
mixed precision. We further extend our discoveries (such as increasing data
diversity is crucial after global deduplication) on a 7B model with large
batch-size training. Our models and the separate SlimPajama-DC datasets are
available at: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B. | http://arxiv.org/pdf/2309.10818 | Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Zhengzhong Liu, Hongyi Wang, Bowen Tan, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing | cs.CL, cs.AI | Technical report. Huggingface: https://huggingface.co/MBZUAI-LLM and
https://huggingface.co/datasets/cerebras/SlimPajama-627B | null | cs.CL | 20230919 | 20231009 | [
{
"id": "2302.13971"
},
{
"id": "2101.00027"
},
{
"id": "1609.04836"
},
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "1711.05101"
},
{
"id": "2304.03208"
},
{
"id": "2108.12409"
},
{
"id": "2002.05202"
},
{
"id": "2205.01068"
},
{
"id": "2204.06745"
},
{
"id": "2305.10429"
},
{
"id": "2302.03169"
},
{
"id": "2004.10964"
},
{
"id": "2112.11446"
},
{
"id": "2306.01116"
},
{
"id": "1911.02782"
}
] |
2309.10305 | 23 | Since our GPUs are shared among multiple users and tasks, the specific behavior of each task is unpredictable, often leading to idle GPU nodes within the cluster. Considering that a single machine equipped with eight A800 GPUs could adequately meet the memory requirements for our Baichuan 2-7B and Baichuan 2-13B models, the primary design criterion for our training framework is machine-level elasticity: resources assigned to a task can be dynamically adjusted according to the cluster status, which serves as the foundation for our smart scheduling algorithm. To meet this elasticity requirement, our training framework integrates tensor parallelism (Narayanan et al., 2021) and ZeRO-powered data parallelism (Rajbhandari et al., 2020), where we set tensor parallelism inside each machine and employ ZeRO shared data parallelism for elastic scaling across machines.
4 https://scipy.org/
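The next paragraph also describes the mixed-precision recipe (BFloat16 forward/backward, Float32 optimizer updates). A minimal PyTorch sketch of one common way to get that behaviour — Float32 parameters with BF16 autocast for the compute — is shown below; the toy model, data, and the assumption of a CUDA device are placeholders, not details from the paper.

```python
import torch
import torch.nn as nn

# Placeholder model and data; parameters are created (and kept) in Float32.
model = nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 1024, device="cuda")
target = torch.randn(8, 1024, device="cuda")

# Under autocast, the forward (and hence backward) matmuls run in BFloat16.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = nn.functional.mse_loss(model(x), target)

loss.backward()        # gradients accumulate in Float32, matching the parameters
optimizer.step()       # so the optimizer update happens in Float32
optimizer.zero_grad()
```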
In addition, we employ a tensor-splitting technique (Nie et al., 2022) where we split certain calculations to reduce peak memory consumption, such as the cross-entropy calculations with large vocabularies. This approach enables us to meet memory needs without extra computing and communication, making the system more efficient. To speed up training without compromising model accuracy, we implement mixed-precision training, where we perform forward and backward computations in BFloat16, while performing optimizer updating in Float32. | 2309.10305#23 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages other than English. In this technical report, we present Baichuan
2, a series of large-scale multilingual language models containing 7 billion
and 13 billion parameters, trained from scratch, on 2.6 trillion tokens.
Baichuan 2 matches or outperforms other open-source models of similar size on
public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan
2 excels in vertical domains such as medicine and law. We will release all
pre-training model checkpoints to benefit the research community in better
understanding the training dynamics of Baichuan 2. | http://arxiv.org/pdf/2309.10305 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu | cs.CL | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | cs.CL | 20230919 | 20230920 | [
{
"id": "2302.13971"
},
{
"id": "2307.12966"
},
{
"id": "1707.06347"
},
{
"id": "2305.18290"
},
{
"id": "2204.02311"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1802.05365"
},
{
"id": "2203.15556"
},
{
"id": "1607.06450"
},
{
"id": "2112.05682"
},
{
"id": "2108.12409"
},
{
"id": "2108.07732"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "2212.08073"
},
{
"id": "2306.01116"
},
{
"id": "1808.06226"
},
{
"id": "2110.14168"
},
{
"id": "2010.14701"
},
{
"id": "2206.04615"
},
{
"id": "1711.05101"
},
{
"id": "2210.09261"
},
{
"id": "2304.10592"
},
{
"id": "2204.05862"
},
{
"id": "2104.09864"
},
{
"id": "2304.08177"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2203.09509"
},
{
"id": "2210.02414"
},
{
"id": "2002.05202"
},
{
"id": "2209.13258"
}
] |