id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
2309.10305#0 | Baichuan 2: Open Large-scale Language Models | arXiv:2309.10305v2 [cs.CL] 20 Sep 2023 # Baichuan 2: Open Large-scale Language Models Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, Juntao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu. Baichuan Inc. # Abstract | 2309.10305#1 | 2309.10305 | [
"2302.13971"
] |
|
2309.10305#1 | Baichuan 2: Open Large-scale Language Models | Large language models (LLMs) have demonstrated remarkable performance on a variety of natural language tasks based on just a few examples of natural language instructions, reducing the need for extensive feature engineering. However, most powerful LLMs are closed-source or limited in their capability for languages other than English. In this technical report, we present Baichuan 2, a series of large-scale multilingual language models containing 7 billion and 13 billion parameters, trained from scratch on 2.6 trillion tokens. Baichuan 2 matches or outperforms other open-source models of similar size on public benchmarks like MMLU, CMMLU, GSM8K, and HumanEval. Furthermore, Baichuan 2 excels in vertical domains such as medicine and law. We will release all pre-training model checkpoints to benefit the research community in better understanding the training dynamics of Baichuan 2. | 2309.10305#0 | 2309.10305#2 | 2309.10305 | [
"2302.13971"
] |
2309.10305#2 | Baichuan 2: Open Large-scale Language Models | 1 # 1 Introduction The field of large language models has witnessed promising and remarkable progress in recent years. The size of language models has grown from millions of parameters, such as ELMo (Peters et al., 2018), GPT-1 (Radford et al., 2018), to billions or even trillions of parameters such as GPT- 3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022; Anil et al., 2023) and Switch Transformers (Fedus et al., 2022). This increase in scale has led to significant improvements in the capabilities of language models, enabling more human-like fluency and the ability to perform a diverse range of natural language tasks. | 2309.10305#1 | 2309.10305#3 | 2309.10305 | [
"2302.13971"
] |
2309.10305#3 | Baichuan 2: Open Large-scale Language Models | With the introduction of ChatGPT (OpenAI, 2022) from OpenAI, the power of these models to generate human-like text has captured widespread public attention. ChatGPT demonstrates strong language proficiency across a variety of domains, from conversing casually to explaining complex concepts. This breakthrough highlights the potential for large language models to automate tasks involving natural language generation and comprehension. While there have been exciting breakthroughs and applications of LLMs, most leading LLMs like GPT-4 (OpenAI, 2023), PaLM-2 (Anil et al., 2023), and Claude (Claude, 2023) remain closed-sourced. Developers and researchers have limited access to the full model parameters, making it difficult for the community to deeply study or fine-tune these systems. More openness and transparency around LLMs could accelerate research and responsible development within this rapidly advancing field. LLaMA (Touvron et al., 2023a), a series of large language models developed by Meta containing up to 65 billion parameters, has significantly benefited the LLM research community by being fully open- sourced. The open nature of LLaMA, along with other open-source LLMs such as OPT (Zhang et al., 2022), Bloom (Scao et al., 2022), MPT (MosaicML, 2023) and Falcon (Penedo et al., 2023), enables researchers to freely access the models for examination, experimentation, and further development. This transparency and access distinguishes LLaMA from other proprietary LLMs. By providing full access, the open-source LLMs have accelerated research and advances in the field, leading to new models like Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), and others (Wang et al., 2022; Zhu et al., 2023; Anand et al., 2023). | 2309.10305#2 | 2309.10305#4 | 2309.10305 | [
"2302.13971"
] |
2309.10305#4 | Baichuan 2: Open Large-scale Language Models | Authors are listed alphabetically, correspondent: [email protected]. However, most open-source large language models have focused primarily on English. For instance, the main data source for LLaMA is Common Crawl1, which comprises 67% of LLaMAâ s pre-training data but is filtered to English content only. Other open source LLMs such as MPT (MosaicML, 2023) and Falcon (Penedo et al., 2023) are also focused on English and have limited capabilities in other languages. This hinders the development and application of LLMs in specific languages, such as Chinese. In this technical report, we introduce Baichuan 2, a series of large-scale multilingual language models. Baichuan 2 has two separate models, Baichuan 2-7B with 7 billion parameters and Baichuan 2-13B with 13 billion parameters. Both models were trained on 2.6 trillion tokens, which to our knowledge is the largest to date, more than double that of Baichuan 1 (Baichuan, 2023b,a). With such a massive amount of training data, Baichuan 2 achieves significant improvements over Baichuan 1. On general benchmarks like MMLU (Hendrycks et al., 2021a), CMMLU (Li et al., 2023), and C-Eval (Huang et al., 2023), Baichuan 2-7B achieves nearly 30% higher performance compared to Baichuan 1-7B. Specifically, Baichuan 2 is optimized to improve performance on math and code problems. On the GSM8K (Cobbe et al., 2021) and HumanEval (Chen et al., 2021) evaluations, Baichuan 2 nearly doubles the results of the Baichuan 1. In addition, Baichuan 2 also demonstrates strong performance on medical and legal domain tasks. On benchmarks such as MedQA (Jin et al., 2021) and JEC-QA (Zhong et al., 2020), Baichuan 2 outperforms other open- source models, making it a suitable foundation model for domain-specific optimization. | 2309.10305#3 | 2309.10305#5 | 2309.10305 | [
"2302.13971"
] |
2309.10305#5 | Baichuan 2: Open Large-scale Language Models | Additionally, we also released two chat models, Baichuan 2-7B-Chat and Baichuan 2- 13B-Chat, optimized to follow human instructions. These models excel at dialogue and context understanding. We will elaborate on our approaches to improve the safety of Baichuan 2. By open-sourcing these models, we hope to enable the community to further improve the safety of large language models, facilitating more research on responsible LLMs development. Furthermore, in spirit of research collaboration and continuous improvement, we are also releasing the checkpoints of Baichuan 2 at various stages | 2309.10305#4 | 2309.10305#6 | 2309.10305 | [
"2302.13971"
] |
2309.10305#6 | Baichuan 2: Open Large-scale Language Models | (Footnote 1: https://commoncrawl.org/) of training from 200 billion tokens up to the full 2.6 trillion tokens. We found that even for the 7 billion parameter model, performance continued to improve after training on more than 2.6 trillion tokens. By sharing these intermediary results, we hope to provide the community with greater insight into the training dynamics of Baichuan 2. Understanding these dynamics is key to unraveling the inner working mechanism of large language models (Biderman et al., 2023a; Tirumala et al., 2022). We believe the release of these checkpoints will pave the way for further advances in this rapidly developing field. In this technical report, we will also share some of the trials, errors, and lessons learned through training Baichuan 2. In the following sections, we will present detailed modifications made to the vanilla Transformer architecture and our training methodology. We will then describe our fine-tuning methods to align the foundation model with human preferences. Finally, we will benchmark the performance of our models against other LLMs on a set of standard tests. Throughout the report, we aim to provide transparency into our process, including unsuccessful experiments, to advance collective knowledge in developing LLMs. | 2309.10305#5 | 2309.10305#7 | 2309.10305 | [
"2302.13971"
] |
2309.10305#7 | Baichuan 2: Open Large-scale Language Models | Baichuan 2â s foundation models and chat models are available for both research and commercial use at https://github.com/ baichuan-inc/Baichuan2 # 2 Pre-training This section introduces the training procedure for the Baichuan 2 foundation models. Before diving into the model details, we first show the overall performance of the Baichuan 2 base models compared to other open or closed-sourced models in Table 1. We then describe our pre-training data and data processing methods. Next, we elaborate on the Baichuan 2 architecture and scaling results. Finally, we describe the distributed training system. # 2.1 Pre-training Data Data sourcing: During data acquisition, our objective is to pursue comprehensive data scalability and representativeness. We gather data from diverse sources including general internet webpages, books, research papers, codebases, and more to build an extensive world knowledge system. | 2309.10305#6 | 2309.10305#8 | 2309.10305 | [
"2302.13971"
] |
2309.10305#8 | Baichuan 2: Open Large-scale Language Models | The composition of the training corpus is shown in Figure 1. GPT-4 GPT-3.5 Turbo 83.93 68.54 70.33 54.06 66.15 47.07 63.27 46.13 75.12 61.59 89.99 57.77 69.51 52.44 LLaMA-7B LLaMA 2-7B MPT-7B Falcon-7B ChatGLM 2-6B (base)â | 2309.10305#7 | 2309.10305#9 | 2309.10305 | [
"2302.13971"
] |
2309.10305#9 | Baichuan 2: Open Large-scale Language Models | Baichuan 1-7B Baichuan 2-7B-Base 27.10 28.90 27.15 24.23 51.70 42.80 54.00 35.10 45.73 27.93 26.03 47.86 42.30 54.16 26.75 31.38 26.00 25.66 - 44.02 57.07 27.81 25.97 26.54 24.24 - 36.34 47.47 28.17 26.53 24.83 24.10 - 34.44 42.73 32.38 39.16 35.20 28.77 33.68 32.48 41.56 9.78 16.22 8.64 5.46 32.37 9.17 24.49 11.59 12.80 14.02 - - 9.20 18.29 LLaMA-13B 28.50 LLaMA 2-13B 35.80 Vicuna-13B 32.80 Chinese-Alpaca-Plus-13B 38.80 XVERSE-13B 53.70 Baichuan 1-13B-Base 52.40 58.10 Baichuan 2-13B-Base 46.30 55.09 52.00 43.90 55.21 51.60 59.17 31.15 37.99 36.28 33.43 58.44 55.30 61.97 28.23 30.83 30.11 34.78 44.69 49.69 54.33 28.22 32.29 31.55 35.46 42.54 43.20 48.17 37.89 46.98 43.04 28.94 38.06 43.01 48.78 20.55 28.89 28.13 11.98 18.20 26.76 52.77 15.24 15.24 16.46 16.46 15.85 11.59 17.07 | 2309.10305#8 | 2309.10305#10 | 2309.10305 | [
"2302.13971"
] |
2309.10305#10 | Baichuan 2: Open Large-scale Language Models | Table 1 (rows grouped into 7B and 13B models): Overall results of Baichuan 2 compared with other similarly sized LLMs on general benchmarks. * denotes results derived from official websites. [Figure 1 pie-chart residue: the category labels of the training-data distribution did not survive extraction; legible fragments include mass media, history, and religion.] | 2309.10305#9 | 2309.10305#11 | 2309.10305 | [
"2302.13971"
] |
2309.10305#11 | Baichuan 2: Open Large-scale Language Models | # 2.2 Architecture The model architecture of Baichuan 2 is based on the prevailing Transformer (Vaswani et al., 2017). Nevertheless, we made several modifications which we detailed below. # 2.3 Tokenizer A tokenizer needs to balance two critical factors: a high compression rate for efficient inference, and an appropriately sized vocabulary to ensure adequate training of each word embedding. We have taken both these aspects into account. We have expanded the vocabulary size from 64,000 in Baichuan 1 to 125,696, aiming to strike a balance between computational efficiency and model performance. | 2309.10305#10 | 2309.10305#12 | 2309.10305 | [
"2302.13971"
] |
2309.10305#12 | Baichuan 2: Open Large-scale Language Models | Figure 1: The distribution of different categories of Baichuan 2 training data. Data processing: For data processing, we focus on data frequency and quality. Data frequency relies on clustering and deduplication. We built a large-scale deduplication and clustering system supporting both LSH-like features and dense embedding features. This system can cluster and deduplicate trillion-scale data within hours. Based on the clustering, individual documents, paragraphs, and sentences are deduplicated and scored. Those scores are then used for data sampling in pre-training. The size of the training data at different stages of data processing is shown in Figure 2. Tokenizer, vocab size, compression rate: LLaMA 2, 32,000, 1.037; Bloom, 250,680, 0.501; ChatGLM 2, 64,794, 0.527; Baichuan 1, 64,000, 0.570; Baichuan 2, 125,696, 0.498. Table 2: The vocab size and text compression rate of Baichuan 2's tokenizer compared with other models. | 2309.10305#11 | 2309.10305#13 | 2309.10305 | [
"2302.13971"
] |
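The deduplication system itself is not released; below is a minimal sketch of LSH-style near-duplicate detection of the kind the row above describes, using the datasketch library. The word-level features, permutation count, and 0.5 similarity threshold are illustrative assumptions, not Baichuan 2's actual settings.

```python
# Minimal near-duplicate filter using MinHash LSH (datasketch).
# Feature choice, num_perm, and the threshold are illustrative only.
from datasketch import MinHash, MinHashLSH

def doc_minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):   # word-level features
        m.update(token.encode("utf-8"))
    return m

lsh = MinHashLSH(threshold=0.5, num_perm=128)   # approximate Jaccard cutoff
docs = {
    "d1": "the quick brown fox jumps over the lazy dog near the river bank",
    "d2": "the quick brown fox jumped over the lazy dog near the river bank",
    "d3": "an entirely different sentence about large language model training",
}

kept = []
for doc_id, text in docs.items():
    m = doc_minhash(text)
    if lsh.query(m):          # a near-duplicate is already indexed, so drop
        continue
    lsh.insert(doc_id, m)
    kept.append(doc_id)

print(kept)   # typically ['d1', 'd3']; d2 is dropped as a near-duplicate of d1
```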
2309.10305#13 | Baichuan 2: Open Large-scale Language Models | The lower the better. We use byte-pair encoding (BPE) (Shibata et al., 1999) from SentencePiece (Kudo and Richardson, 2018) to tokenize the data. Specifically, we do not apply any normalization to the input text and we [Figure 2 flowchart residue: stages labelled exact and heuristic deduplication, sentence-wise quality filtering, document-wise deduplication, and sentence- and paragraph-wise deduplication, each with a data-retention percentage; the figure itself did not survive extraction.] | 2309.10305#12 | 2309.10305#14 | 2309.10305 | [
"2302.13971"
] |
2309.10305#14 | Baichuan 2: Open Large-scale Language Models | Figure 2: The data processing procedure of Baichuan 2's pre-training data. Baichuan 2-7B: RoPE positional embedding, hidden size 4,096, FFN size 11,008, 32 attention heads, 32 layers, sequence length 4,096, max LR 2e-4. Baichuan 2-13B: ALiBi positional embedding, hidden size 5,120, FFN size 13,696, 40 attention heads, 40 layers, sequence length 4,096, max LR 1.5e-4. Table 3: Model details of Baichuan 2. do not add a dummy prefix as in Baichuan 1. | 2309.10305#13 | 2309.10305#15 | 2309.10305 | [
"2302.13971"
] |
2309.10305#15 | Baichuan 2: Open Large-scale Language Models | We split numbers into individual digits to better encode numeric data. To handle code data containing extra whitespaces, we add whitespace-only tokens to the tokenizer. The character coverage is set to 0.9999, with rare characters falling back to UTF-8 bytes. We set the maximum token length to 32 to account for long Chinese phrases. The training data for the Baichuan 2 tokenizer comes from the Baichuan 2 pre-training corpus, with more sampled code examples and academic papers to improve coverage (Taylor et al., 2022). Table 2 shows a detailed comparison of Baichuan 2â s tokenizer with others. | 2309.10305#14 | 2309.10305#16 | 2309.10305 | [
"2302.13971"
] |
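The exact tokenizer-training invocation is not given in the paper; the sketch below shows how the stated choices (BPE, 125,696 vocabulary, no input normalization, no dummy prefix, digit splitting, byte fallback at 0.9999 character coverage, 32-character maximum piece length, whitespace preservation for code) map onto SentencePiece options. The corpus path and model prefix are placeholders, and the whitespace-related flags assume a recent SentencePiece version.

```python
# Sketch: the described tokenizer settings expressed as SentencePiece options.
# "corpus.txt" and the model prefix are placeholders, not Baichuan artifacts.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",                   # placeholder path to training text
    model_prefix="bpe_tokenizer",
    model_type="bpe",                     # byte-pair encoding
    vocab_size=125_696,                   # expanded from 64,000 in Baichuan 1
    character_coverage=0.9999,            # rare characters fall back to bytes
    byte_fallback=True,
    split_digits=True,                    # numbers become individual digits
    max_sentencepiece_length=32,          # allow long Chinese phrases
    add_dummy_prefix=False,               # no dummy prefix, unlike Baichuan 1
    normalization_rule_name="identity",   # no normalization of the input text
    remove_extra_whitespaces=False,       # keep whitespace runs in code data
    allow_whitespace_only_pieces=True,    # permit whitespace-only tokens
)

sp = spm.SentencePieceProcessor(model_file="bpe_tokenizer.model")
print(sp.encode("def add(a, b):\n    return a + 12345", out_type=str))
```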
2309.10305#16 | Baichuan 2: Open Large-scale Language Models | # 2.3.1 Positional Embeddings To enable further research on bias-based and multiplication-based attention, we apply RoPE on Baichuan 2-7B and ALiBi on Baichuan 2-13B, consistent with Baichuan 1. # 2.4 Activations and Normalizations We use the SwiGLU (Shazeer, 2020) activation function, a switch-activated variant of GLU (Dauphin et al., 2017) which shows improved results. However, SwiGLU has a " | 2309.10305#15 | 2309.10305#17 | 2309.10305 | [
"2302.13971"
] |
2309.10305#17 | Baichuan 2: Open Large-scale Language Models | bilinear" layer and contains three parameter matrices, differing from the vanilla Transformer's feed-forward layer that has two matrices, so we reduce the hidden size from 4 times the hidden size to 8/3 times the hidden size, rounded to a multiple of 128. Building on Baichuan 1, we adopt Rotary Positional Embedding (RoPE) (Su et al., 2021) for Baichuan 2-7B and ALiBi (Press et al., 2021) for Baichuan 2-13B. ALiBi is a more recent positional encoding technique that has shown improved extrapolation performance. However, most open-sourced models use RoPE for positional embeddings, and optimized attention implementations like Flash Attention (Dao et al., 2022; Dao, 2023) are currently better suited to RoPE since it is multiplication-based, bypassing the need for passing attention_mask to the attention operation. Nevertheless, in preliminary experiments, the choice of positional embedding did not significantly impact model performance. For the attention layer of Baichuan 2, we adopt the memory efficient attention (Rabe and Staats, 2021) implemented by xFormers (footnote 2). By leveraging xFormers' optimized attention with biasing capabilities, we can efficiently incorporate ALiBi's bias-based positional encoding while reducing memory overhead. This provides performance and efficiency benefits for Baichuan 2' | 2309.10305#16 | 2309.10305#18 | 2309.10305 | [
"2302.13971"
] |
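A minimal sketch of the SwiGLU feed-forward block described above, with the hidden width reduced from 4d to 8/3 of d and rounded up to a multiple of 128. The module and variable names are illustrative, but the rounding arithmetic reproduces the FFN sizes listed in Table 3.

```python
# SwiGLU feed-forward sketch: three weight matrices instead of the vanilla
# Transformer's two, with hidden width 8/3 * d rounded up to a multiple of 128.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFFN(nn.Module):
    def __init__(self, d_model: int, multiple_of: int = 128):
        super().__init__()
        hidden = int(8 * d_model / 3)
        hidden = multiple_of * ((hidden + multiple_of - 1) // multiple_of)
        self.gate_proj = nn.Linear(d_model, hidden, bias=False)
        self.up_proj = nn.Linear(d_model, hidden, bias=False)
        self.down_proj = nn.Linear(hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

print(SwiGLUFFN(4096).gate_proj.out_features)   # 11008, matches Table 3 (7B)
print(SwiGLUFFN(5120).gate_proj.out_features)   # 13696, matches Table 3 (13B)
```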
2309.10305#18 | Baichuan 2: Open Large-scale Language Models | s large-scale training. We apply Layer Normalization (Ba et al., 2016) to the input of the Transformer block, which is more robust to the warm-up schedule (Xiong et al., 2020). In addition, we use the RMSNorm implementation introduced by Zhang and Sennrich (2019), which only calculates the variance of input features to improve efficiency. (Footnote 2: https://github.com/facebookresearch/xformers) # 2.5 Optimizations We use the AdamW (Loshchilov and Hutter, 2017) optimizer for training. β1 and β2 are set to 0.9 and 0.95, respectively. We use weight decay of 0.1 and clip the grad norm to 0.5. The models are warmed up with 2,000 linear scaling steps to the max learning rate, after which cosine decay is applied down to the minimum learning rate. The parameter details and learning rates are shown in Table 3. The models are trained using BFloat16 mixed precision. Compared to Float16, BFloat16 has a better dynamic range, making it more robust to large values that are critical in training large language models. However, BFloat16' | 2309.10305#17 | 2309.10305#19 | 2309.10305 | [
"2302.13971"
] |
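A sketch of RMSNorm as described above (only the uncentered variance of the input features is used, with no mean subtraction), together with the quoted optimizer settings applied to a toy module. This illustrates the description; it is not the released training code.

```python
# RMSNorm sketch: scale by the reciprocal root-mean-square of the features,
# then by a learned per-feature weight. No mean subtraction as in LayerNorm.
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)

# Optimizer settings quoted in the text, on a toy model (warmup/cosine omitted).
model = nn.Sequential(RMSNorm(4096), nn.Linear(4096, 4096))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4,
                              betas=(0.9, 0.95), weight_decay=0.1)
loss = model(torch.randn(2, 4096)).pow(2).mean()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)   # grad-norm clip 0.5
optimizer.step()
```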
2309.10305#19 | Baichuan 2: Open Large-scale Language Models | s low precision causes issues in some settings. For instance, in some public RoPE and ALibi implementations, the torch.arange operation fails due to collisions when the integer exceeds 256, preventing differentiation of nearby positions. Therefore, we use full precision for some value- sensitive operations such as positional embeddings. NormHead: To stabilize training and improve the model performance, we normalize the output embeddings (which are also referred as â headâ ). There are two advantages of NormHead in our experiment. First, in our preliminary experiments we found that the norm of the head are prone to be unstable. The norm of the rare tokenâ | 2309.10305#18 | 2309.10305#20 | 2309.10305 | [
"2302.13971"
] |
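A small demonstration of the BFloat16 issue described above: with roughly 8 bits of mantissa, not every integer above 256 is representable, so position indices computed in BFloat16 collide. This is an illustration of the numeric behavior, not a claim about any specific RoPE or ALiBi implementation.

```python
# BFloat16 cannot represent every integer above 256, so position indices
# computed in bf16 collide; full precision keeps every position distinct.
import torch

pos_bf16 = torch.arange(0, 512, dtype=torch.bfloat16)
pos_fp32 = torch.arange(0, 512, dtype=torch.float32)

print(torch.unique(pos_bf16).numel())   # fewer than 512: collisions above 256
print(torch.unique(pos_fp32).numel())   # 512: every position is distinct
print(pos_bf16[255:260])                # repeated values appear past 256
```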
2309.10305#20 | Baichuan 2: Open Large-scale Language Models | s embedding becomes smaller during training, which disturbs the training dynamics. NormHead can stabilize the dynamics significantly. Second, we found that the semantic information is mainly encoded by the cosine similarity of the embeddings rather than by L2 distance. Since the current linear classifier computes logits by dot product, which mixes L2 distance and cosine similarity, NormHead alleviates the distraction of L2 distance in computing logits. For more details, please refer to Appendix B. Max-z loss: During training, we found that the logits of LLMs could become very large, while the softmax function is agnostic to the absolute logit values, as it depends only on their relative values. Large logits caused issues during inference because common implementations of repetition penalty (such as the Hugging Face implementation in model.generate; see footnote 3) apply a scalar (e.g. 1.1 or 1.2) directly to the logits. Contracting very large logits in this way can significantly alter the probabilities after softmax, making the model sensitive to the choice of repetition penalty hyper-parameter. Inspired by NormSoftmax (Jiang et al., 2023b) and the auxiliary z-loss from PaLM (Chowdhery et al., 2022), we added a max-z loss to normalize the logits: | 2309.10305#19 | 2309.10305#21 | 2309.10305 | [
"2302.13971"
] |
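One way to read the NormHead description above is an output layer whose embedding rows are L2-normalized before the logit dot product, so logits reflect cosine direction rather than the unstable norms of rare-token embeddings. The sketch below follows that reading; it is not necessarily identical to the released implementation.

```python
# NormHead sketch: L2-normalize the output-embedding rows before computing
# logits, so the dot product behaves like a cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormHead(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(vocab_size, hidden_size))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        norm_weight = F.normalize(self.weight, dim=-1)   # unit-norm rows
        return F.linear(hidden_states, norm_weight)      # [..., vocab_size]

head = NormHead(hidden_size=4096, vocab_size=125_696)
print(head(torch.randn(1, 8, 4096)).shape)   # torch.Size([1, 8, 125696])
```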
2309.10305#21 | Baichuan 2: Open Large-scale Language Models | L_max-z = 2e-4 * z^2 (1) where z is the maximum logit value. This helped stabilize training and made the inference more robust to hyper-parameters. [Figure 3 plot residue: loss curves for Baichuan 2-7B and Baichuan 2-13B over 0 to 2,500 billion training tokens; only axis ticks and legend labels survived extraction.] Figure 3: The pre-training loss of Baichuan 2. The final training loss of Baichuan 2-7B and Baichuan 2-13B are shown in Figure 3. | 2309.10305#20 | 2309.10305#22 | 2309.10305 | [
"2302.13971"
] |
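A sketch of the max-z auxiliary loss in Eq. (1): the squared maximum logit is penalized with coefficient 2e-4 and added to the language-modeling loss. The paper does not state how the penalty is reduced over positions, so the mean below is an assumption.

```python
# Max-z auxiliary loss sketch (Eq. 1): coeff * z_max^2, added to the LM loss.
import torch

def max_z_loss(logits: torch.Tensor, coeff: float = 2e-4) -> torch.Tensor:
    z_max = logits.max(dim=-1).values      # largest logit at each position
    return coeff * (z_max ** 2).mean()     # reduction choice is an assumption

logits = 30.0 * torch.randn(2, 8, 125_696)   # deliberately large logits
lm_loss = torch.tensor(2.31)                 # placeholder LM cross-entropy
total_loss = lm_loss + max_z_loss(logits)
print(max_z_loss(logits), total_loss)
```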
2309.10305#22 | Baichuan 2: Open Large-scale Language Models | # 2.6 Scaling Laws Neural scaling laws, where the error decreases as a power function of training set size, model size, or both, have made performance predictable as training has become more and more expensive in deep learning and large language models. Before training the large language models of billions of parameters, we first train some small-sized models and fit a scaling law for training larger models. We launched a range of model sizes going from 10M to 3B, ranging from 1/1000 to 1/10 the size of the final model, and each model is trained for up to 1 trillion tokens, using consistent hyper-parameters and the same data set sourced from Baichuan 2. | 2309.10305#21 | 2309.10305#23 | 2309.10305 | [
"2302.13971"
] |
2309.10305#23 | Baichuan 2: Open Large-scale Language Models | Based on the final loss of different models (Footnote 3: https://huggingface.co/transformers/v4.1.1/_modules/transformers/generation_logits_process.html), we can obtain a mapping from the training flops to the target loss. [Figure 4 plot residue: model-loss curves against log FLOPs with legend entries for the 10M to 3B models; the plot itself did not survive extraction.] Figure 4: | 2309.10305#22 | 2309.10305#24 | 2309.10305 | [
"2302.13971"
] |
2309.10305#24 | Baichuan 2: Open Large-scale Language Models | The scaling law of Baichuan 2. We trained various models ranging from 10 million to 3 billion parameters with 1 trillion tokens. By fitting a power law term to the losses given training flops, we predicted losses for training Baichuan 2-7B and Baichuan 2-13B on 2.6 trillion tokens. This fitting process precisely predicted the final modelsâ losses (marked with two stars). To fit the scaling law of the model, we employed the formula given by Henighan et al. (2020): | 2309.10305#23 | 2309.10305#25 | 2309.10305 | [
"2302.13971"
] |
2309.10305#25 | Baichuan 2: Open Large-scale Language Models | L_C = a × C^b + L_∞ (2) where L_∞ is the irreducible loss and the first term is the reducible loss, which is formulated as a power-law scaling term. C is the training flops and L_C is the final loss of the model at that flops. We used the curve_fit function from the SciPy library (footnote 4) to fit the parameters. The final fitted scaling curve and the predicted final losses of the 7 billion and 13 billion parameter models are shown in Figure 4. We can see that the fitted scaling law predicted Baichuan 2's final loss with high accuracy. | 2309.10305#24 | 2309.10305#26 | 2309.10305 | [
"2302.13971"
] |
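A sketch of fitting Eq. (2) with SciPy's curve_fit, as the text describes. The FLOPs/loss pairs below are made-up placeholders rather than the paper's measurements, and a power-law fit generally needs a sensible initial guess.

```python
# Fit L_C = a * C^b + L_inf with scipy.optimize.curve_fit (Eq. 2).
# The data points are placeholders, not Baichuan 2's measurements.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(C, a, b, L_inf):
    return a * np.power(C, b) + L_inf

flops = np.array([1e19, 3e19, 1e20, 3e20, 1e21, 3e21])    # placeholder FLOPs
losses = np.array([2.90, 2.75, 2.62, 2.52, 2.44, 2.38])   # placeholder losses

params, _ = curve_fit(scaling_law, flops, losses,
                      p0=(30.0, -0.05, 1.5), maxfev=20000)
a, b, L_inf = params
print(f"a={a:.3g}  b={b:.3g}  irreducible loss={L_inf:.3g}")
print("extrapolated loss at 3e23 FLOPs:", scaling_law(3e23, *params))
```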
2309.10305#26 | Baichuan 2: Open Large-scale Language Models | # 2.7 Infrastructure Efficiently leveraging existing GPU resources plays a critically important role in training and developing large language models today. To accomplish this, we develop a co-design approach for an elastic training framework and a smart cluster scheduling policy. Since our GPUs are shared among multiple users and tasks, the specific behavior of each task is unpredictable, often leading to idle GPU nodes within the cluster. Considering that a single machine equipped with eight A800 GPUs could adequately meet the memory requirements for our Baichuan 2-7B and Baichuan 2-13B models, the | 2309.10305#25 | 2309.10305#27 | 2309.10305 | [
"2302.13971"
] |
2309.10305#27 | Baichuan 2: Open Large-scale Language Models | (Footnote 4: https://scipy.org/) primary design criterion for our training framework is machine-level elasticity, meaning that resources for tasks can be dynamically modified according to the cluster status, which thereby serves as the foundation for our smart scheduling algorithm. To meet the requirement of machine-level elasticity, our training framework integrates tensor parallelism (Narayanan et al., 2021) and ZeRO-powered data parallelism (Rajbhandari et al., 2020), where we set tensor parallelism inside each machine and employ ZeRO shared data parallelism for elastic scaling across machines. In addition, we employ a tensor-splitting technique (Nie et al., 2022) where we split certain calculations to reduce peak memory consumption, such as the cross-entropy calculations with large vocabularies. This approach enables us to meet memory needs without extra computing and communication, making the system more efficient. To speed up training without compromising model accuracy, we implement mixed-precision training, where we perform forward and backward computations in BFloat16, while performing optimizer updates in Float32. Furthermore, in order to efficiently scale our training cluster to thousands of GPUs, we integrate the following techniques to avoid the degradation of communication efficiency: • Topology-aware distributed training. In large-scale clusters, network connections frequently span multiple layers of switches. We strategically arrange the ranks for distributed training to minimize frequent access across different switches, which reduces latency and thereby enhances overall training efficiency. | 2309.10305#26 | 2309.10305#28 | 2309.10305 | [
"2302.13971"
] |
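A toy sketch of the mixed-precision recipe stated above (forward and backward in BFloat16, optimizer update on Float32 parameters), using torch.autocast for illustration. The real setup with tensor parallelism and ZeRO partitioning is considerably more involved.

```python
# Mixed-precision sketch: autocast-eligible ops run in bf16, while the fp32
# master weights are what the optimizer updates. Illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
optimizer = torch.optim.AdamW(model.parameters(), lr=1.5e-4,
                              betas=(0.9, 0.95), weight_decay=0.1)

x, target = torch.randn(8, 1024), torch.randn(8, 1024)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = nn.functional.mse_loss(model(x), target)   # linears run in bf16
loss.backward()                                       # backward through bf16 ops
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()                                      # update on fp32 weights
optimizer.zero_grad()
```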
2309.10305#28 | Baichuan 2: Open Large-scale Language Models | â ¢ Hybrid and hierarchical partition for ZeRO. across GPUs, By partitioning parameters ZeRO3 reduces memory consumption at the expense of additional all-gather communications. This approach would lead to a significant communication bottleneck when scaling to thousands of GPUs (Jiang et al., 2023a). To address this issue, we propose a hybrid and hierarchical partitioning scheme. Specifically, our framework first partitions the optimizer states across all GPUs, and then adaptively decides which layers need to activate ZeRO3, and whether partitioning parameters hierarchically. By integrating these strategies, our system is capable of training Baichuan 2-7B and Baichuan 2-13B models efficiently on 1,024 NVIDIA A800 GPUs, achieving a computational efficiency that exceeds 180 TFLOPS. | 2309.10305#27 | 2309.10305#29 | 2309.10305 | [
"2302.13971"
] |
2309.10305#29 | Baichuan 2: Open Large-scale Language Models | # 3 Alignment Baichuan 2 also introduces an alignment procedure resulting in two chat models: Baichuan 2-7B-Chat and Baichuan 2-13B-Chat. The alignment process of Baichuan 2 encompasses two main components: Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). # 3.1 Supervised Fine-Tuning During the supervised fine-tuning phase, we use human labelers to annotate prompts gathered from various data sources. Each prompt is labeled as being helpful or harmless based on key principles similar to Claude (2023). To validate data quality, we use cross-validation: an authoritative annotator checks the quality of a sample batch annotated by a specific crowd worker group, rejecting any batches that do not meet our quality standards. We collected over 100k supervised fine-tuning samples and trained our base model on them. Next, we delineated the reinforcement learning process via the RLHF method to further improve results. The whole process of RLHF, including RM and RL training, is shown in Figure 5. | 2309.10305#28 | 2309.10305#30 | 2309.10305 | [
"2302.13971"
] |
2309.10305#30 | Baichuan 2: Open Large-scale Language Models | [Figure 5 diagram residue: a prompt pool feeds model variants whose responses (Responses 1-4) are scored by the reward model (Scores 1-4), with checkpoints saved during RL training; the diagram itself did not survive extraction.] Figure 5: An illustration of Baichuan 2's RLHF process. # 3.2 Reward Model We devised a three-tiered classification system for all prompts, consisting of 6 primary categories, 30 secondary categories, and over 200 tertiary categories. | 2309.10305#29 | 2309.10305#31 | 2309.10305 | [
"2302.13971"
] |
2309.10305#31 | Baichuan 2: Open Large-scale Language Models | From the user's perspective, we aim for the classification system to comprehensively cover all types of user needs. From the standpoint of reward model training, prompts within each Score gap vs. test accuracy: 1, 54.5%; 2, 61.1%; 3, 70.2%; 4, 77.8%; 5, 81.5%. Table 4: Reward Model test accuracy on different score gaps of two responses. The larger the response gap, the better the RM accuracy. The gaps 1, 2, 3, 4, 5 correspond to unsure, negligibly better, slightly better, better, and significantly better, respectively. | 2309.10305#30 | 2309.10305#32 | 2309.10305 | [
"2302.13971"
] |
2309.10305#32 | Baichuan 2: Open Large-scale Language Models | From the userâ s perspective, we aim for the classification system to comprehensively cover all types of user needs. From the standpoint of reward model training, prompts within each Score Gap Test Acc. 3 54.5% 61.1% 70.2% 77.8% 81.5% 1 2 4 5 Table 4: Reward Model test accuracy on different score gaps of two responses. The larger the response gap, the better RM accuracy. The gap 1,2,3,4,5 correspond to unsure, negligibly better, slightly better, better, and significantly better, respectively. | 2309.10305#31 | 2309.10305#33 | 2309.10305 | [
"2302.13971"
] |
2309.10305#33 | Baichuan 2: Open Large-scale Language Models | category should have sufficient diversity to ensure the reward model can generalize well. Given a prompt, responses are generated by Baichuan 2 models of different sizes and stages (SFT, PPO) to enhance response diversity. Only responses generated by the Baichuan 2 model family are used in the RM training. Responses from other open-source datasets and proprietary models do not improve the reward model's accuracy. This also underscores the intrinsic consistency of the Baichuan 2 model series from another perspective. The loss function used for training the reward model is consistent with that in InstructGPT (Ouyang et al., 2022). The reward model derived from training exhibits a performance consistent with that of LLaMA 2 (Touvron et al., 2023b): the greater the score difference between two responses, the higher the discriminative accuracy of the reward model, as shown in Table 4. # 3.3 PPO After obtaining the reward model, we employ the PPO (Schulman et al., 2017) algorithm to train our language model. We employ four models: the actor model (responsible for generating responses), the reference model (used to compute the KL penalty, with fixed parameters), the reward model (providing an overarching reward for the entire response, with fixed parameters), and the critic model (designed to learn per-token values). | 2309.10305#32 | 2309.10305#34 | 2309.10305 | [
"2302.13971"
] |
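For reference, the pairwise ranking loss used for reward models in InstructGPT, which the text above says Baichuan 2's reward-model training is consistent with, can be sketched as below; the tensor shapes are illustrative.

```python
# InstructGPT-style pairwise reward-model loss: push the chosen response's
# reward above the rejected one's.
import torch
import torch.nn.functional as F

def rm_ranking_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # r_chosen / r_rejected: scalar rewards per comparison pair, shape [batch]
    return -F.logsigmoid(r_chosen - r_rejected).mean()

r_chosen = torch.tensor([1.2, 0.3, 2.0])
r_rejected = torch.tensor([0.4, 0.1, -0.5])
print(rm_ranking_loss(r_chosen, r_rejected))   # smaller when margins are larger
```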
2309.10305#34 | Baichuan 2: Open Large-scale Language Models | The loss function used for training the reward in InstructGPT model The reward model (Ouyang et al., 2022). derived from training exhibits a performance consistent with that of LLaMA 2 (Touvron et al., 2023b), the greater the score difference between two responses, the higher the discriminative accuracy of the reward model, as shown in Table 4. # 3.3 PPO After obtaining the reward model, we employ the PPO (Schulman et al., 2017) algorithm to train our language model. We employ four models: the actor model (responsible for generating responses), the reference model (used to compute the KL penalty with fixed parameters), the reward model (providing an overarching reward for the entire response with fixed parameters), and the critic model (designed to learn per-token values). | 2309.10305#33 | 2309.10305#35 | 2309.10305 | [
"2302.13971"
] |
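A sketch of the clipped PPO policy objective with the hyper-parameters quoted above (clip threshold ε = 0.1, KL coefficient decaying from 0.2 toward 0.005 over 350 iterations). The paper gives only the endpoints, so the linear decay and the tensor shapes below are assumptions.

```python
# Clipped PPO policy loss plus a decaying KL-penalty coefficient, using the
# quoted hyper-parameters. Schedule shape and tensor shapes are assumed.
import torch

def ppo_policy_loss(logp_new, logp_old, advantages, clip_eps: float = 0.1):
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

def kl_coeff(step: int, total_steps: int = 350,
             start: float = 0.2, end: float = 0.005) -> float:
    frac = min(step / total_steps, 1.0)      # linear decay is an assumption
    return start + frac * (end - start)

logp_old = torch.randn(4, 16)
logp_new = logp_old + 0.05 * torch.randn(4, 16)
advantages = torch.randn(4, 16)
print(ppo_policy_loss(logp_new, logp_old, advantages), kl_coeff(100))
```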
2309.10305#35 | Baichuan 2: Open Large-scale Language Models | # 3.4 Training Details During the RLHF training process, the critic model is warmed up with an initial 20 training steps ahead. Subsequently, both the critic and actor models are updated via the standard PPO algorithm. For all models, we use gradient clipping of 0.5, a constant learning rate of 5e-6, and a PPO clip threshold ϵ = 0.1. We set the KL penalty coefficient β = 0.2, decaying to 0.005 over steps. We train for 350 iterations for all our chat models, resulting in Baichuan 2-7B-Chat and Baichuan 2-13B-Chat. | 2309.10305#34 | 2309.10305#36 | 2309.10305 | [
"2302.13971"
] |
2309.10305#36 | Baichuan 2: Open Large-scale Language Models | # 4 Safety We believe that model safety improvements stem not only from constraints during data cleansing or alignment stages but also from harnessing positive knowledge and identifying negative knowledge during all training stages. Guided by this concept, we have enhanced model safety throughout the Baichuan 2 training process. # 4.1 Pre-training Stage In the pre-training stage, we pay close attention to data safety. The entire pre-training dataset underwent a rigorous data filtering process aimed at enhancing safety. We devised a system of rules and models to eliminate harmful content such as violence, pornography, racial discrimination, hate speech, and more. Furthermore, we curated a Chinese-English bilingual dataset comprising several million webpages from hundreds of reputable websites that represent various positive value domains, encompassing areas such as policy, law, vulnerable groups, general values, traditional virtues, and more. We also heightened the sampling probability for this dataset. | 2309.10305#35 | 2309.10305#37 | 2309.10305 | [
"2302.13971"
] |
2309.10305#37 | Baichuan 2: Open Large-scale Language Models | # 4.2 Alignment Stage We build a red-teaming procedure consisting of 6 types of attacks and 100+ granular safety value categories, an expert annotation team of 10 with traditional internet security experience initialized safe alignment prompts. The relevant snippets from the pre-training dataset were retrieved to create responses, resulting in approximately 1K annotated data for initialization. â ¢ The expert annotation team guided a 50-person outsourced annotation team through red-blue confrontation with the initialized alignment model, resulting in the generation of 200K attack prompts. specialized multi-value supervised sampling method, we maximized the utilization of attack data to generate responses at varying safety levels. During the RL optimization stage, we also take During the RL optimization stage, we also take safety into the first account: # safety into the first account: â | 2309.10305#36 | 2309.10305#38 | 2309.10305 | [
"2302.13971"
] |
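The DPO objective cited above (Rafailov et al., 2023) is commonly written as below. β and the sequence-level log-probabilities are illustrative inputs; this is a generic sketch, not Baichuan 2's safety-specific setup.

```python
# Generic DPO loss sketch: contrast policy vs. frozen-reference log-probs of
# chosen and rejected responses.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta: float = 0.1):
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

print(dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
               torch.tensor([-12.5]), torch.tensor([-14.0])))
```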
2309.10305#38 | Baichuan 2: Open Large-scale Language Models | ¢ At the onset of safety reinforcement, DPO (Rafailov et al., 2023) methods efficiently employed limited amounts of annotated data to enhance performance concerning specific vulnerability issues. â ¢ By employing a Reward Model that integrates Helpful and Harmless objectives, PPO safety reinforcement training was conducted. # 5 Evaluations In this section, we report the zero-shot or few-shot results of the pre-trained base models on standard benchmarks. We evaluate Baichuan 2 on free-form generation tasks and multiple-choice tasks. â ¢ Free-form generation: Models are given some sample inputs (shots) and then generate continuations to obtain results, like for question answering, translation, and other tasks. Multiple-choice: Models are given a question and multiple choices, and the task is to select the most appropriate candidates. Given the variety of tasks and examples, we incorporated open-source evaluation frameworks like lm-evaluation-harness (Gao et al., 2021) and OpenCompass (OpenCompass, 2023) into our in-house implementations for fair benchmarking against other models. The models we choose to compare have similar sizes to Baichuan 2 and are open-sourced that the results can reproduced: â ¢ LLaMA (Touvron et al., 2023b): | 2309.10305#37 | 2309.10305#39 | 2309.10305 | [
"2302.13971"
] |
2309.10305#39 | Baichuan 2: Open Large-scale Language Models | The language models trained by Meta on 1 trillion tokens. The context length is 2,048 and we evaluate both LLaMA 7B and LLaMA 13B. â ¢ LLaMA 2 (Touvron et al., 2023c): A successor model to LLaMA 1 trained on 2 trillion tokens and better data mixture. â ¢ Baichuan 1 (Baichuan, 2023b): The Baichuan 7B is trained on 1.2 trillion tokens and Baichuan 13B is trained on 1.4 trillion tokens. | 2309.10305#38 | 2309.10305#40 | 2309.10305 | [
"2302.13971"
] |
2309.10305#40 | Baichuan 2: Open Large-scale Language Models | Both of them focus on English and Chinese. â ¢ ChatGLM 2-6B (Zeng et al., 2022): A chat language model that has strong performance on several benchmarks5. â ¢ MPT-7B (MosaicML, 2023): An open-source LLMs trained 1 trillion tokens of English text and code. â ¢ Falcon-7B (Penedo et al., 2023): A series of LLMs trained on 1 trillion tokens enhanced with curated corpora. | 2309.10305#39 | 2309.10305#41 | 2309.10305 | [
"2302.13971"
] |
2309.10305#41 | Baichuan 2: Open Large-scale Language Models | It is made available under the Apache 2.0 license. â ¢ Vicuna-13B (Chiang et al., 2023): A language model trained by fine-tuning LLaMA-13B on the 5They do not release their base models so we adopt the result they report in their website. conversational dataset generated by ChatGPT. â ¢ Chinese-Alpaca-Plus-13B (Cui et al., 2023): A language model trained by fine-tuning LLaMA- 13B on the conversational dataset generated by ChatGPT. large language model trained on more than 1.4 trillion tokens. | 2309.10305#40 | 2309.10305#42 | 2309.10305 | [
"2302.13971"
] |
2309.10305#42 | Baichuan 2: Open Large-scale Language Models | # 5.1 Overall Performance This section introduces the overall performance of Baichuan 2 base models compared with other similar-sized models. We choose 8 benchmarks for comparison: MMLU (Hendrycks et al., 2021a) The Massive Multitask Language Understanding consists of a range of multiple-choice questions on academic subjects. C-Eval (Huang et al., 2023) is a comprehensive Chinese evaluation benchmark consists of more than 10k multi-choice questions. CMMLU (Li et al., 2023) is also a general evaluation benchmark specifically designed to evaluate the knowledge and reasoning abilities of LLMs within the context of the Chinese language and culture. AGIEval (Zhong et al., 2023) is a human-centric benchmark specifically designed to evaluate general abilities like human cognition and problem-solving. Gaokao (Zhang et al., 2023) is an evaluation framework that utilizes Chinese high school entrance examination questions. BBH (Suzgun et al., 2022) is a suite of challenging BIG-Bench (Srivastava et al., 2022) tasks that the language model evaluations did not outperform the average human-rater. GSM8K (Cobbe et al., 2021) is an evaluation benchmarks that focused on math. HumanEval (Chen et al., 2021) is a docstring-to- code dataset consisting of 164 coding problems that test various aspects of programming logic. For CMMLU and MMLU, we adopt the official implementations and adopt 5-shot for evaluation. For BBH we adopt 3-shot evaluations. For C-Eval, Gaokao, and AGIEval we only select the multiple- choice with four candidates for better evaluations. For GSM8K, we adopt 4-shot testing derived from OpenCompass (OpenCompass, 2023). We also incorporate the result of GPT-46 and GPT-3.5- Turbo7. Unless stated otherwise, the results in this paper were obtained using our internal evaluation tools. The overall result is shown in Table 1. Compared | 2309.10305#41 | 2309.10305#43 | 2309.10305 | [
"2302.13971"
] |
2309.10305#43 | Baichuan 2: Open Large-scale Language Models | 6gpt-4-0613 7gpt-3.5-turbo-0613 with other similar-sized open-sourced models, our model has a clear performance advantage. Especially in math and code problems, our model achieves significant improvement over Baichuan 1. # 5.2 Vertical Domain Evaluations We also evaluate Baichuan 2 in vertical domains, where we choose the law and medical field as they has been widely studied in recent years. In the law field, we report scores of JEC-QA (Zhong et al., 2020), which is collected from the National Judicial Examination of China. | 2309.10305#42 | 2309.10305#44 | 2309.10305 | [
"2302.13971"
] |
2309.10305#44 | Baichuan 2: Open Large-scale Language Models | It contains multiple-choice and multiple-answer questions. For compatibility with our evaluation suite, we only test the multiple-choice questions. In the medical field, we report scores from two medical benchmarks, MedQA (Jin et al., 2021) and MedMCQA (Pal et al., 2022), as well as average scores from medical-related disciplines in C-Eval (val), MMLU, and CMMLU (abbreviated as CMC). Specifically, MedMCQA is collected from the professional medical board exams in the USA and China, including three subsets, i.e., USMLE, MCMLE and TWMLE, and we report the results of USMLE and MCMLE with five candidates; MedMCQA is collected from from Indian medical entrance exams, and we evaluate multiple-choice questions and report the scores in the dev set. The detail of MedMCQA includes (1) clinical medicine, basic medicine of C-Eval (val), (2) clinical knowledge, anatomy, college medicine, college biology, nutrition, virology, medical genetics, professional medicine of MMLU, (3) anatomy, clinical knowledge, college medicine, genetics, nutrition, traditional chinese medicine, virology of CMMLU. | 2309.10305#43 | 2309.10305#45 | 2309.10305 | [
"2302.13971"
] |
2309.10305#45 | Baichuan 2: Open Large-scale Language Models | Moreover, all these datasets are evaluated in 5-shot. As shown in Table 5 Baichuan 2-7B-Base surpasses models such as GPT-3.5 Turbo, ChatGLM 2-6B, and LLaMA 2-7B in the field of Chinese law, second only to GPT-4. Compared to Baichuan 1-7B, Baichuan 2-7B-Base shows an improvement of nearly 10 points. In the medical field, Baichuan 2-7B-Base outperforms models like ChatGLM 2-6B and LLaMA 2-7B, showing significant improvement over Baichuan 1-7B as well. Similarly, Baichuan 2-13B-Base surpasses models other than GPT-4 in the field of Chinese law. In the medical domain, Baichuan 2-13B- Base outperforms models such as XVERSE-13B and LLaMA 2-13B. Compared to Baichuan 1- 13B-Base, Baichuan 2-13B-Base also exhibits remarkable improvement. | 2309.10305#44 | 2309.10305#46 | 2309.10305 | [
"2302.13971"
] |
2309.10305#46 | Baichuan 2: Open Large-scale Language Models | # 5.3 Math and Code This section introduces the performance in mathematics and coding. We use GSM8K (Cobbe et al., 2021) (4-shot) and MATH (Hendrycks et al., 2021b) (4-shot) to evaluate the mathematical ability. MATH contains 12,500 mathematical questions that are harder to be solved. To evaluate the modelâ s code ability, we report the scores in HumanEval (Chen et al., 2021) (0-shot) and MBPP (Austin et al., 2021) (3-shot). â ¢ HumanEval is a series of programming tasks including model language comprehension, reasoning, algorithms, and simple mathematics to evaluate the correctness of the model and measure the modelâ s problem-solving ability. â | 2309.10305#45 | 2309.10305#47 | 2309.10305 | [
"2302.13971"
] |
2309.10305#47 | Baichuan 2: Open Large-scale Language Models | MBPP consists of 974 short Python functions and program textual descriptions, along with test cases used to verify the correctness of their functionality. We use OpenCompass to evaluate the ability of models in math and code. As shown in Table 6, in the field of mathematics, Baichuan 2-7B-Base surpasses models like LLaMA 2-7B. In the code domain, it outperforms models of the same size such as ChatGLM 2-6B. Baichuan 2-7B-Base exhibits significant improvement compared to the Baichuan 1-7B model. In mathematics, Baichuan 2-13B-Base surpasses all models of the same size, approaching the level of GPT-3.5 Turbo. In the code domain, Baichuan 2-13B-Base outperforms models like LLaMA 2-13B and XVERSE-13B. Baichuan 2-13B-Base demonstrates significant improvement compared to Baichuan 1-13B-Base. | 2309.10305#46 | 2309.10305#48 | 2309.10305 | [
"2302.13971"
] |
2309.10305#48 | Baichuan 2: Open Large-scale Language Models | # 5.4 Multilingual We use Flores-101 (NLLB Team, 2022; Goyal et al., 2021; Guzmán et al., 2019) to evaluate multilingual ability. Flores-101 covers 101 languages from around the world. Its data is sourced from various domains such as news, travel guides, and books. We selected the official languages of the United Nations (Arabic (ar), Chinese (zh), English (en), French (fr), Russian (ru), and Spanish (es)), as well as German (de) and Japanese (ja), as the test languages. | 2309.10305#47 | 2309.10305#49 | 2309.10305 | [
"2302.13971"
] |
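For the translation subtasks that follow, scoring typically comes down to prompting with a handful of parallel examples and computing corpus BLEU over the generated outputs. The sketch below illustrates this for zh-en using sacrebleu; the prompt wording and the `translate` callable are assumptions, and the paper runs its version of this evaluation through OpenCompass.

```python
# Sketch: 8-shot zh->en translation scoring with corpus BLEU (sacrebleu).
# Prompt format and the `translate` callable are illustrative assumptions.
import sacrebleu

def few_shot_prompt(shots, source_sentence):
    """shots: list of (zh, en) pairs used as in-context demonstrations (8 in the paper)."""
    lines = [f"Chinese: {zh}\nEnglish: {en}" for zh, en in shots]
    lines.append(f"Chinese: {source_sentence}\nEnglish:")
    return "\n\n".join(lines)

def bleu_zh_en(translate, shots, sources, references):
    hypotheses = [translate(few_shot_prompt(shots, src)).strip() for src in sources]
    # sacrebleu takes the hypothesis list plus a list of reference streams
    return sacrebleu.corpus_bleu(hypotheses, [references]).score
```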
2309.10305#49 | Baichuan 2: Open Large-scale Language Models | We conducted 8-shot tests on seven subtasks in Flores- Figure 6: Helpfulness and harmlessness before and after safety alignment of Baichuan 2. The x-axis shows the metric before safety alignment and the y-axis shows the result after. We see that helpfulness remains largely unchanged after this procedure, while harmlessness improved substantially (more mass in the upper triangle) with safety efforts. | 2309.10305#48 | 2309.10305#50 | 2309.10305 | [
"2302.13971"
] |
2309.10305#50 | Baichuan 2: Open Large-scale Language Models | 101, including zh-en, zh-fr, zh-es, zh-ar, zh-ru, zh-ja, and zh-de. The evaluation is conducted with OpenCompass. In the multilingual domain, as shown in Table 7, Baichuan 2-7B-Base surpasses all models of the same size in all seven tasks and shows significant improvement compared to Baichuan 1-7B. Baichuan 2-13B-Base outperforms models of the same size in four out of the seven tasks. In the zh-en and zh-ja tasks, it surpasses GPT-3.5 Turbo and reaches the level of GPT-4. Compared to Baichuan 1-13B-Base, Baichuan 2-13B-Base exhibits significant improvement in the zh-ar, zh-ru, and zh-ja tasks. Although GPT-4 still dominates in the field of multilingualism, open-source models are catching up closely. In zh-en tasks, Baichuan 2-13B-Base has slightly surpassed GPT-4. | 2309.10305#49 | 2309.10305#51 | 2309.10305 | [
"2302.13971"
] |
2309.10305#51 | Baichuan 2: Open Large-scale Language Models | # 5.5 Safety Evaluations In Sec. 4, we describe the efforts made to improve the safety of Baichuan 2. However, some prior work indicates that helpfulness and harmlessness are two sides of a seesaw: when harmlessness increases, helpfulness can decrease somewhat (Bai et al., 2022a). So we evaluate these two factors before and after safety alignment. Figure 6 shows the helpfulness and harmlessness before and after the safety alignment of Baichuan 2. We can see that our safety alignment process did not hurt helpfulness while significantly improving harmlessness. Then we evaluate the safety of our pre-trained models using the Toxigen (Hartvigsen et al., 2022) dataset. As with LLaMA 2, we use the cleaned | 2309.10305#50 | 2309.10305#52 | 2309.10305 | [
"2302.13971"
] |
2309.10305#52 | Baichuan 2: Open Large-scale Language Models | (Table 5 data; columns: JEC-QA / CMC / MedQA-USMLE / MedQA-MCMLE / MedMCQA.) GPT-4: 59.32 / 77.16 / 80.28 / 74.58 / 72.51. GPT-3.5 Turbo: 42.31 / 61.17 / 53.81 / 52.92 / 56.25. 7B: LLaMA-7B: 27.45 / 33.34 / 24.12 / 21.72 / 27.45. LLaMA2-7B: 29.20 / 36.75 / 27.49 / 24.78 / 37.93. MPT-7B: 27.45 / 26.67 / 16.97 / 19.79 / 31.96. Falcon-7B: 23.66 / 25.33 / 21.29 / 18.07 / 33.88. ChatGLM2-6B: 40.76 / 44.54 / 26.24 / 45.53 / 30.22. Baichuan 1-7B: 34.64 / 42.37 / 27.42 / 39.46 / 31.39. Baichuan 2-7B-Base: 44.46 / 56.39 / 32.68 / 54.93 / 41.73. 13B: LLaMA-13B: 27.54 / 35.14 / 28.83 / 23.38 / 39.52. LLaMA 2-13B: 34.08 / 47.42 / 35.04 / 29.74 / 42.12. Vicuna-13B: 28.38 / 40.99 / 34.80 / 27.67 / 40.66. Chinese-Alpaca-Plus-13B: 35.32 / 46.31 / 27.49 / 32.66 / 35.87. XVERSE-13B: 46.42 / 58.08 / 32.99 / 58.76 / 41.34. Baichuan 1-13B-Base: 41.34 / 51.77 / 29.07 / 43.67 / 39.60. Baichuan 2-13B-Base: 47.40 / 59.33 / 40.38 / 61.62 / 42.86. | 2309.10305#51 | 2309.10305#53 | 2309.10305 | [
"2302.13971"
] |
2309.10305#53 | Baichuan 2: Open Large-scale Language Models | Table 5: The results of Baichuan 2 compared with other models in the law and medical fields. (Table 6 data; columns: GSM8K / MATH / HumanEval / MBPP.) GPT-4: 89.99 / 40.20 / 69.51 / 63.60. GPT-3.5 Turbo: 57.77 / 13.96 / 52.44 / 61.40. LLaMA-7B: 9.78 / 3.02 / 11.59 / 14.00. LLaMA 2-7B: 16.22 / 3.24 / 12.80 / 14.80. MPT-7B: 8.64 / 2.90 / 14.02 / 23.40. Falcon-7B: 5.46 / 1.68 / - / 10.20. ChatGLM 2-6B: 28.89 / 6.40 / 9.15 / 9.00. Baichuan 1-7B: 9.17 / 2.54 / 9.20 / 6.60. Baichuan 2-7B-Base: 24.49 / 5.58 / 18.29 / 24.20. LLaMA-13B: 20.55 / 3.68 / 15.24 / 21.40. LLaMA 2-13B: 28.89 / 4.96 / 15.24 / 27.00. Vicuna-13B: 28.13 / 4.36 / 16.46 / 15.00. Chinese-Alpaca-Plus-13B: 11.98 / 2.50 / 16.46 / 20.00. XVERSE-13B: 18.20 / 2.18 / 15.85 / 16.80. Baichuan 1-13B-Base: 26.76 / 4.84 / 11.59 / 22.80. Baichuan 2-13B-Base: 52.77 / 10.08 / 17.07 / 30.20. Table 6: The results of Baichuan 2 compared with other models on mathematics and coding. zh-en zh-fr 29.94 29.56 20.01 10.76 18.62 13.26 20.83 19.70 27.67 26.15 19.58 10.73 17.45 # Average | 2309.10305#52 | 2309.10305#54 | 2309.10305 | [
"2302.13971"
] |
2309.10305#54 | Baichuan 2: Open Large-scale Language Models | GPT-4 GPT-3.5 Turbo 20.43 17.59 1.82 LLaMA-7B LLaMA 2-7B MPT-7B Falcon-7B ChatGLM 2-6B Baichuan 1-7B Baichuan 2-7B-Base 17.27 12.02 9.54 25.76 15.14 11.92 8.96 20.77 9.53 9.28 22.13 15.67 22.28 7.77 9.42 25.07 16.51 12.72 27.27 20.87 16.17 0.00 0.79 0.10 0.11 0.64 0.41 1.39 4.47 4.99 3.54 1.35 1.78 6.66 11.21 1.41 2.20 2.91 0.41 0.26 2.24 3.11 8.73 10.15 6.54 6.41 4.61 9.86 12.76 7.63 10.14 7.48 7.91 6.68 10.50 13.25 LLaMA-13B 21.75 16.16 13.29 25.44 19.25 17.49 LLaMA 2-13B Vicuna-13B 22.63 18.04 14.67 Chinese-Alpaca-Plus-13B 22.53 13.82 11.29 29.26 24.03 16.67 XVERSE-13B Baichuan 1-13B-Base 30.24 20.90 15.92 30.61 22.11 17.27 Baichuan 2-13B-Base 0.58 1.38 0.70 0.28 2.78 0.98 2.39 10.66 0.41 7.61 11.13 0.13 10.34 10.25 3.59 9.27 8.13 0.31 1.52 14.26 3.08 11.61 9.65 12.00 2.64 14.17 11.58 14.53 10.07 12.17 11.31 8.27 14.53 13.19 16.09 | 2309.10305#53 | 2309.10305#55 | 2309.10305 | [
"2302.13971"
] |
2309.10305#55 | Baichuan 2: Open Large-scale Language Models | Table 7: The results of Baichuan 2 compared with other models in the multilingual field. version from the SafeNLP project [8], distinguishing neutral and hate types for the 13 minority groups, forming a 6-shot dataset consistent with the original Toxigen prompt format. Our decoding parameters use temperature 0.1 and top-p 0.9 nucleus sampling. We use the fine-tuned HateBERT version optimized in Toxigen (Hartvigsen et al., 2022) for model evaluation. | 2309.10305#54 | 2309.10305#56 | 2309.10305 | [
"2302.13971"
] |
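To make the Toxigen setup above concrete, the sketch below pairs nucleus-sampled continuations (temperature 0.1, top-p 0.9, as stated in the text) with a HateBERT-style classifier. The checkpoint names, the label handling, and the use of the transformers pipeline are assumptions for illustration rather than the authors' exact code.

```python
# Sketch: score continuations of 6-shot Toxigen prompts with a HateBERT-style
# classifier. Checkpoint names and label handling are assumptions, not the
# authors' exact setup; decoding settings follow the text (T=0.1, top-p=0.9).
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

gen_tok = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-7B-Base", trust_remote_code=True)
gen_model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-7B-Base", trust_remote_code=True)
scorer = pipeline("text-classification", model="tomh/toxigen_hatebert")  # assumed checkpoint

def toxigen_rate(prompts, max_new_tokens=64):
    """Fraction of prompts whose continuation the classifier flags as hateful."""
    flagged = 0
    for prompt in prompts:
        inputs = gen_tok(prompt, return_tensors="pt")
        output = gen_model.generate(**inputs, max_new_tokens=max_new_tokens,
                                    do_sample=True, temperature=0.1, top_p=0.9)
        continuation = gen_tok.decode(output[0][inputs["input_ids"].shape[1]:],
                                      skip_special_tokens=True)
        label = scorer(continuation, truncation=True)[0]["label"]
        flagged += label.lower() in {"hate", "label_1", "toxic"}  # assumed label names
    return flagged / len(prompts)
```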
2309.10305#56 | Baichuan 2: Open Large-scale Language Models | Table 8 shows that, compared to LLaMA 2, the Baichuan 2-7B and Baichuan 2-13B models have some safety advantages. To ensure comprehensive coverage within each category, we asked human annotators to generate 1,400 data samples. These were further expanded through self-instruction and cleaned by humans for fluency, resulting in 70,000 total samples with 10,000 per category. Examples of those safety prompts and principles are shown in Appendix D. We use those samples to evaluate different models and the results are shown in Table 9. We can see that Baichuan 2 is on par with or outperforms other chat models in our safety evaluations. (Table 8 data; Toxigen score, lower is better.) Baichuan 2-13B: 11.48. Baichuan 2-7B: 11.72. LLaMA 2-7B: 12.28. LLaMA 2-13B: 13.24. | 2309.10305#55 | 2309.10305#57 | 2309.10305 | [
"2302.13971"
] |
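A per-category tally like the one reported in Table 9 can be sketched in a few lines. The `respond` and `is_safe` callables below are hypothetical placeholders; the paper does not describe its exact scoring implementation, so this is only an illustration of the bookkeeping.

```python
# Sketch: per-category safe-response rates on a BHED-style evaluation set.
# `is_safe` stands in for whatever judgment procedure is used (human or model).
from collections import defaultdict

def safety_report(samples, respond, is_safe):
    """samples: iterable of (category, prompt); returns {category: % safe} plus an average."""
    safe = defaultdict(int)
    total = defaultdict(int)
    for category, prompt in samples:
        total[category] += 1
        safe[category] += bool(is_safe(respond(prompt)))
    rates = {c: 100.0 * safe[c] / total[c] for c in total}
    rates["Average"] = sum(rates.values()) / len(rates) if rates else 0.0
    return rates
```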
2309.10305#57 | Baichuan 2: Open Large-scale Language Models | # Intermediate Checkpoints We will also release the intermediate checkpoints of the 7B model, from the 220 billion token checkpoint to the 2,640 billion token checkpoint, which is the final output of Baichuan 2-7B-Base. We examine their performance on several benchmarks and the results are shown in Figure 7. Table 8: Toxigen results of Baichuan 2 foundation models compared with LLaMA 2. Inspired by BeaverTails (Ji et al., 2023) [9], we constructed the Baichuan Harmless Evaluation Dataset (BHED), covering 7 major safety categories: bias/discrimination, insults/profanity, illegal/unethical content, physical health, mental health, financial privacy, and sensitive topics, to evaluate the safety of our chat models. As shown in the figure, Baichuan 2 demonstrates consistent improvement as training proceeds. | 2309.10305#56 | 2309.10305#58 | 2309.10305 | [
"2302.13971"
] |
2309.10305#58 | Baichuan 2: Open Large-scale Language Models | Even after 2.6 trillion tokens, there appears to be ample room for further gains. This aligns with previous work on scaling LLMs indicating that data size is a critical factor (Hoffmann et al., 2022). In Appendix C, we provide more detailed training dynamics for both the 7B and 13B models. # 6 Related Work [8] https://github.com/microsoft/SafeNLP/tree/main [9] https://github.com/PKU-Alignment/beavertails | 2309.10305#57 | 2309.10305#59 | 2309.10305 | [
"2302.13971"
] |
2309.10305#59 | Baichuan 2: Open Large-scale Language Models | The field of language models has undergone a renaissance in recent years, sparked largely by the development of deep neural networks and ChatGLM 2-6B Vicuna 13B LLaMA 2 7B-chat LLaMA 2 13B-chat Chinese Alpaca 2-13B Baichuan 2-7B-chat Baichuan 2-13B-chat s e e t o n siti v 61.80% 61.00% 51.90% 53.40% 53.20% 78.20% 87.10% p i c s d is c ri m i n a ti o n p r o f a n it y u n e t h i c a l c 96.40% 99.10% 97.31% 98.03% 99.10% 98.32% 97.25% 95.23% 98.23% 97.25% 98.27% 99.04% 85.12% 96.34% 93.17% 96.00% 99.10% 97.12% 98.97% 99.10% 98.36% o n t e n t p h y si c a l h e a lt h 100.00% 99.80% 99.60% 100.00% 99.60% 100.00% 100.00% m e n t a l h e a lt h 98.23% 99.40% 98.23% 99.80% 99.31% 99.80% 99.80% fi n a y c a c i a l p ri v g e r a n v A 97.34% 93.01% 98.50% 93.58% 90.83% 95.34% 92.25% 97.79% 89.04% 96.53% 95.45% 96.84% 97.50% 98.12% e | 2309.10305#58 | 2309.10305#60 | 2309.10305 | [
"2302.13971"
] |
2309.10305#60 | Baichuan 2: Open Large-scale Language Models | Table 9: The results of different chat models on our safety evaluation benchmarks. (Figure 7 plots C-Eval 5-shot, MMLU 5-shot, and CMMLU 5-shot scores against Baichuan 2-7B checkpoints from 220 to 2,640 billion training tokens.) Figure 7: The results of intermediate checkpoints of Baichuan 2-7B, which will be released to the public. | 2309.10305#59 | 2309.10305#61 | 2309.10305 | [
"2302.13971"
] |
2309.10305#61 | Baichuan 2: Open Large-scale Language Models | Transformers (Vaswani et al., 2017). Kaplan et al. (2020) proposed the scaling laws for large model pre-training. By systematically analyzing model performance as parameters and data size increased, they provided a blueprint for the current era of massive models with hundreds of or even billions of parameters. Seizing upon these scaling laws, organizations like OpenAI, Google, Meta, and Anthropic have engaged in a computing arms race to create ever- larger LLMs. Spurred by the OpenAIâ s 175 billion parameters proprietary language model GPT-3 (Brown et al., 2020). The few-shot or even zero-shot ability of LLMs has revolved most natural language understanding tasks. | 2309.10305#60 | 2309.10305#62 | 2309.10305 | [
"2302.13971"
] |
2309.10305#62 | Baichuan 2: Open Large-scale Language Models | From code generation to math-solving problems or even open- world scenarios. Specialized scientific LLMs like Galactica (Taylor et al., 2022) have also emerged to showcase the potential for large models to assimilate technical knowledge. However, raw parameter count alone does not determine model capability - Chinchilla (Hoffmann et al., 2022) demonstrated that scaling model capacity according to the number of tokens, rather than just parameters, can yield better sample efficiency. Concurrent with the development of private LLMs, academic and non-profit efforts have worked to develop open-source alternatives like Bloom (Scao et al., 2022), OPT (Zhang et al., 2022) and Pythia (Biderman et al., 2023b). Although some open-source large language models contain up to 175 billion parameters, most are trained on only 500 billion tokens or less. This is relatively small considering that 7 billion parameter models can still significantly improve after being trained on trillions of tokens. Among those open-sourced models, LLaMA (Touvron et al., 2023b) and its successor LLaMA 2 (Touvron et al., 2023c) stands out for its performance and transparency. Which was quickly optimized by the community for better inference speed and various applications. In addition to those foundation models, a lot of chat models have also been proposed to follow human instructions. Most of them fine-tune the foundation models to align with human (OpenAI, 2022; Wang et al., 2023). Those chat models have demonstrated a marked improvement in understanding human instructions and solving complex tasks (Chiang et al., 2023; Xu et al., 2023; Sun et al., 2023). To further improve alignment, (Ouyang et al., 2022) incorporates the Reinforcement Learning from Human Feedback (RLHF) approach. This involves learning from human preferences by training a reward model on human-rated outputs. Other methods such as direct preference optimization (DPO) (Rafailov et al., 2023) and reinforcement learning from AI feedback (RLAIF) (Bai et al., 2022b) have also been proposed to improve the RLHF both in terms of efficiency and effectiveness. | 2309.10305#61 | 2309.10305#63 | 2309.10305 | [
"2302.13971"
] |
2309.10305#63 | Baichuan 2: Open Large-scale Language Models | # 7 Limitations and Ethical Considerations Like other large language models, Baichuan 2 also faces ethical challenges. Itâ s prone to biases and toxicity, especially given that much of its training data originates from the internet. Despite our best efforts to mitigate these issues using benchmarks like Toxigen (Hartvigsen et al., 2022), the risks cannot be eliminated, and toxicity tends to increase with model size. Moreover, the knowledge of Baichuan 2 models is static and can be outdated or incorrect, posing challenges in fields that require up-to-date information like medicine or law. While optimized for Chinese and English for safety, the model has limitations in other languages and may not fully capture biases relevant to non-Chinese cultures. | 2309.10305#62 | 2309.10305#64 | 2309.10305 | [
"2302.13971"
] |
2309.10305#64 | Baichuan 2: Open Large-scale Language Models | Thereâ s also the potential for misuse, as the model could be used to generate harmful or misleading content. Although we try our best efforts to balance safety and utility, some safety measures may appear as over-cautions, affecting the modelâ s usability for certain tasks. We encourage users to make responsible and ethical use of Baichuan 2 models. Meanwhile, we will continue to optimize these issues and release updated versions in the future. # References Yuvanesh Anand, Zach Nussbaum, Brandon Duderstadt, Benjamin Schmidt, and Andriy Mulyar. 2023. | 2309.10305#63 | 2309.10305#65 | 2309.10305 | [
"2302.13971"
] |
2309.10305#65 | Baichuan 2: Open Large-scale Language Models | Gpt4all: Training an assistant-style chatbot with large scale data distillation from gpt-3.5-turbo. GitHub. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. | 2309.10305#64 | 2309.10305#66 | 2309.10305 | [
"2302.13971"
] |
2309.10305#66 | Baichuan 2: Open Large-scale Language Models | Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073. Baichuan. 2023a. A 13b large language model developed by baichuan intelligent technology. Baichuan. 2023b. A large-scale 7b pretraining language model developed by baichuan-inc. Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle Oâ Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023a. | 2309.10305#65 | 2309.10305#67 | 2309.10305 | [
"2302.13971"
] |
2309.10305#67 | Baichuan 2: Open Large-scale Language Models | Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397â 2430. PMLR. Stella Rose Biderman, Hailey Schoelkopf, Quentin G. Anthony, Herbie Bradley, Kyle Oâ Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023b. | 2309.10305#66 | 2309.10305#68 | 2309.10305 | [
"2302.13971"
] |
2309.10305#68 | Baichuan 2: Open Large-scale Language Models | Pythia: A suite for analyzing large language models across training and scaling. ArXiv, abs/2304.01373. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â | 2309.10305#67 | 2309.10305#69 | 2309.10305 | [
"2302.13971"
] |
2309.10305#69 | Baichuan 2: Open Large-scale Language Models | 1901. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. | 2309.10305#68 | 2309.10305#70 | 2309.10305 | [
"2302.13971"
] |
2309.10305#70 | Baichuan 2: Open Large-scale Language Models | Evaluating large language models trained on code. CoRR, abs/2107.03374. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023). | 2309.10305#69 | 2309.10305#71 | 2309.10305 | [
"2302.13971"
] |
2309.10305#71 | Baichuan 2: Open Large-scale Language Models | Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Claude. 2023. Conversation with Claude AI assistant. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. | 2309.10305#70 | 2309.10305#72 | 2309.10305 | [
"2302.13971"
] |
2309.10305#72 | Baichuan 2: Open Large-scale Language Models | Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Yiming Cui, Ziqing Yang, and Xin Yao. 2023. Efficient and effective text encoding for chinese llama and alpaca. arXiv preprint arXiv:2304.08177. Tri Dao. 2023. FlashAttention-2: Faster attention with better parallelism and work partitioning. Tri Dao, Daniel Y. | 2309.10305#71 | 2309.10305#73 | 2309.10305 | [
"2302.13971"
] |
2309.10305#73 | Baichuan 2: Open Large-scale Language Models | Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In International conference on machine learning, pages 933â 941. PMLR. William Fedus, Barret Zoph, and Noam Shazeer. 2022. | 2309.10305#72 | 2309.10305#74 | 2309.10305 | [
"2302.13971"
] |
2309.10305#74 | Baichuan 2: Open Large-scale Language Models | Switch transformers: Scaling to trillion parameter The models with simple and efficient sparsity. Journal of Machine Learning Research, 23(1):5232â 5270. Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. | 2309.10305#73 | 2309.10305#75 | 2309.10305 | [
"2302.13971"
] |
2309.10305#75 | Baichuan 2: Open Large-scale Language Models | A framework for few-shot language model evaluation. Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng- Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marcâ Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2021. The flores-101 evaluation low-resource and multilingual benchmark for machine translation. Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marcâ | 2309.10305#74 | 2309.10305#76 | 2309.10305 | [
"2302.13971"
] |
2309.10305#76 | Baichuan 2: Open Large-scale Language Models | Aurelio Ranzato. 2019. Two new evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english. Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021a. | 2309.10305#75 | 2309.10305#77 | 2309.10305 | [
"2302.13971"
] |
2309.10305#77 | Baichuan 2: Open Large-scale Language Models | Measuring massive multitask language understanding. In ICLR. OpenReview.net. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical arXiv problem solving with the math dataset. preprint arXiv:2103.03874. Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, and Scaling laws for et al. Scott Gray. 2020. autoregressive generative modeling. arXiv preprint arXiv:2010.14701. | 2309.10305#76 | 2309.10305#78 | 2309.10305 | [
"2302.13971"
] |
2309.10305#78 | Baichuan 2: Open Large-scale Language Models | Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute- arXiv preprint optimal large language models. arXiv:2203.15556. Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. | 2309.10305#77 | 2309.10305#79 | 2309.10305 | [
"2302.13971"
] |
2309.10305#79 | Baichuan 2: Open Large-scale Language Models | C-eval: A multi-level multi-discipline chinese evaluation arXiv preprint suite for arXiv:2305.08322. Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Chi Zhang, Ce Bian, Chi Zhang, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. 2023. Beavertails: Towards improved safety alignment of llm via a human-preference dataset. Youhe Jiang, Fangcheng Fu, Xupeng Miao, Xiaonan Nie, and Bin Cui. 2023a. | 2309.10305#78 | 2309.10305#80 | 2309.10305 | [
"2302.13971"
] |
2309.10305#80 | Baichuan 2: Open Large-scale Language Models | Osdp: Optimal sharded data parallel for distributed deep learning. arXiv preprint arXiv:2209.13258. Zixuan Jiang, Jiaqi Gu, and David Z Pan. 2023b. Normsoftmax: Normalizing the input of softmax to accelerate and stabilize training. In 2023 IEEE International Conference on Omni-layer Intelligent Systems (COINS), pages 1â 6. IEEE. Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2021. | 2309.10305#79 | 2309.10305#81 | 2309.10305 | [
"2302.13971"
] |