id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
2309.16609#137 | Qwen Technical Report | Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. YaRN: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071, 2023a. Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. arXiv preprint arXiv:2306.14824, 2023b. | 2309.16609#136 | 2309.16609#138 | 2309.16609 | [
"2305.20050"
] |
2309.16609#138 | Qwen Technical Report | Qwen Team, Alibaba Group. Evaluation benchmark for code interpreter, 2023a. URL https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark. Qwen Team, Alibaba Group. Evaluation benchmark for tool usage through ReAct prompting, 2023b. URL https://github.com/QwenLM/Qwen-7B/tree/main/eval. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. Technical report, OpenAI, 2018. | 2309.16609#137 | 2309.16609#139 | 2309.16609 | [
"2305.20050"
] |
2309.16609#139 | Qwen Technical Report | Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training Gopher. arXiv preprint arXiv:2112.11446, 2021. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. | 2309.16609#138 | 2309.16609#140 | 2309.16609 | [
"2305.20050"
] |
2309.16609#140 | Qwen Technical Report | Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485– | 2309.16609#139 | 2309.16609#141 | 2309.16609 | [
"2305.20050"
] |
2309.16609#141 | Qwen Technical Report | 5551, 2020. Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017. Scott E. Reed, Konrad Zolna, Emilio Parisotto, Sergio G´omez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards, Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar, and Nando de Freitas. | 2309.16609#140 | 2309.16609#142 | 2309.16609 | [
"2305.20050"
] |
2309.16609#142 | Qwen Technical Report | A generalist agent. Trans. Mach. Learn. Res., 2022, 2022. URL https://openreview.net/forum?id=1ikK0kHjvj. Baptiste Rozi`ere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, J´er´emy Rapin, et al. | 2309.16609#141 | 2309.16609#143 | 2309.16609 | [
"2305.20050"
] |
2309.16609#143 | Qwen Technical Report | Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. | 2309.16609#142 | 2309.16609#144 | 2309.16609 | [
"2305.20050"
] |
2309.16609#144 | Qwen Technical Report | SocialIQA: Com- monsense reasoning about social interactions. CoRR, abs/1904.09728, 2019. URL http: //arxiv.org/abs/1904.09728. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagn´e, Alexandra Sasha Luccioni, Franc¸ois Yvon, Matthias Gall´e, et al. | 2309.16609#143 | 2309.16609#145 | 2309.16609 | [
"2305.20050"
] |
2309.16609#145 | Qwen Technical Report | BLOOM: A 176B- parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022. Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023. | 2309.16609#144 | 2309.16609#146 | 2309.16609 | [
"2305.20050"
] |
2309.16609#146 | Qwen Technical Report | 30 John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Noam Shazeer. GLU variants improve transformer. arXiv preprint arXiv:2002.05202, 2020. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. | 2309.16609#145 | 2309.16609#147 | 2309.16609 | [
"2305.20050"
] |
2309.16609#147 | Qwen Technical Report | Hug- gingGPT: Solving AI tasks with ChatGPT and its friends in HuggingFace. arXiv preprint arXiv:2303.17580, 2023. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catan- zaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019. | 2309.16609#146 | 2309.16609#148 | 2309.16609 | [
"2305.20050"
] |
2309.16609#148 | Qwen Technical Report | Qingyi Si, Tong Wang, Naibin Gu, Rui Liu, and Zheng Lin. Alpaca-CoT: An instruction-tuning platform with unified interface of instruction collection, parameter-efficient methods, and large language models, 2023. URL https://github.com/PhoebusSi/alpaca-CoT. Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. | 2309.16609#147 | 2309.16609#149 | 2309.16609 | [
"2305.20050"
] |
2309.16609#149 | Qwen Technical Report | Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492, 2023. Stability AI. StableBeluga2, 2023. URL https://huggingface.co/stabilityai/ StableBeluga2. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. | 2309.16609#148 | 2309.16609#150 | 2309.16609 | [
"2305.20050"
] |
2309.16609#150 | Qwen Technical Report | Advances in Neural Information Processing Systems, 33:3008â 3021, 2020. Jianlin Su. Improving transformer: Length extrapolation ability and position robustness, 2023a. URL https://spaces.ac.cn/archives/9444. Jianlin Su. The magical effect of the Bias term: RoPE + Bias = better length extrapolation, 2023b. URL https://spaces.ac.cn/archives/9577. Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. | 2309.16609#149 | 2309.16609#151 | 2309.16609 | [
"2305.20050"
] |
2309.16609#151 | Qwen Technical Report | Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864, 2021. Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan Shao, Qiong Tang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, and Xipeng Qiu. | 2309.16609#150 | 2309.16609#152 | 2309.16609 | [
"2305.20050"
] |
2309.16609#152 | Qwen Technical Report | MOSS: Training conversational language models from synthetic data, 2023a. Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023b. Mirac Suzgun, Nathan Scales, Nathanael Sch¨arli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. Marc Szafraniec, Baptiste Rozi`ere, Hugh Leather, Patrick Labatut, Franc¸ois Charton, and Gabriel Synnaeve. | 2309.16609#151 | 2309.16609#153 | 2309.16609 | [
"2305.20050"
] |
2309.16609#153 | Qwen Technical Report | Code translation with compiler representations. In The Eleventh International Confer- ence on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=XomEU3eNeSQ. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. | 2309.16609#152 | 2309.16609#154 | 2309.16609 | [
"2305.20050"
] |
2309.16609#154 | Qwen Technical Report | In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 4149â 4158. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1421. URL https://doi.org/10.18653/v1/n19-1421. | 2309.16609#153 | 2309.16609#155 | 2309.16609 | [
"2305.20050"
] |
2309.16609#155 | Qwen Technical Report | 31 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford Alpaca: An instruction-following LLaMA model, 2023. URL https://github.com/tatsu-lab/stanford_alpaca. Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. | 2309.16609#154 | 2309.16609#156 | 2309.16609 | [
"2305.20050"
] |
2309.16609#156 | Qwen Technical Report | Galactica: A large language model for science, 2022. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Kathleen S. Meier- Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Ol- son, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Ag¨uera y Arcas, Claire Cui, Marian Croak, Ed H. Chi, and Quoc Le. | 2309.16609#155 | 2309.16609#157 | 2309.16609 | [
"2305.20050"
] |
2309.16609#157 | Qwen Technical Report | LaMDA: Language models for dialog applications. CoRR, abs/2201.08239, 2022. URL https://arxiv.org/abs/2201.08239. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. | 2309.16609#156 | 2309.16609#158 | 2309.16609 | [
"2305.20050"
] |
2309.16609#158 | Qwen Technical Report | LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aur´elien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. | 2309.16609#157 | 2309.16609#159 | 2309.16609 | [
"2305.20050"
] |
2309.16609#159 | Qwen Technical Report | Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023b. doi: 10.48550/arXiv.2307.09288. URL https://doi.org/ 10.48550/arXiv.2307.09288. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin. | 2309.16609#158 | 2309.16609#160 | 2309.16609 | [
"2305.20050"
] |
2309.16609#160 | Qwen Technical Report | Attention is all you need. Advances in neural information processing systems, 30, 2017. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023a. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Huai hsin Chi, and Denny Zhou. Self- consistency improves chain of thought reasoning in language models. | 2309.16609#159 | 2309.16609#161 | 2309.16609 | [
"2305.20050"
] |
2309.16609#161 | Qwen Technical Report | ArXiv, abs/2203.11171, 2022. In Conference on Empirical Methods in Natural Language Processing, 2017. URL https://api. semanticscholar.org/CorpusID:910689. Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. | 2309.16609#160 | 2309.16609#162 | 2309.16609 | [
"2305.20050"
] |
2309.16609#162 | Qwen Technical Report | How far can camels go? Exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023b. 32 Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning language models with self-generated instructions. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 13484â | 2309.16609#161 | 2309.16609#163 | 2309.16609 | [
"2305.20050"
] |
2309.16609#163 | Qwen Technical Report | 13508. Association for Computational Linguistics, 2023c. doi: 10.18653/v1/2023.acl-long.754. URL https://doi.org/10.18653/v1/ 2023.acl-long.754. Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv preprint arXiv:2109.00859, 2021. | 2309.16609#162 | 2309.16609#164 | 2309.16609 | [
"2305.20050"
] |
2309.16609#164 | Qwen Technical Report | Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi D. Q. Bui, Junnan Li, and Steven C. H. Hoi. CodeT5+: Open code large language models for code understanding and generation. CoRR, abs/2305.07922, 2023d. doi: 10.48550/arXiv.2305.07922. URL https://doi.org/10. 48550/arXiv.2305.07922. | 2309.16609#163 | 2309.16609#165 | 2309.16609 | [
"2305.20050"
] |
2309.16609#165 | Qwen Technical Report | Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022a. URL https://openreview.net/forum?id= gEZrGCozdqR. | 2309.16609#164 | 2309.16609#166 | 2309.16609 | [
"2305.20050"
] |
2309.16609#166 | Qwen Technical Report | Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Huai hsin Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022, 2022b. URL https://api.semanticscholar.org/ CorpusID:249674500. | 2309.16609#165 | 2309.16609#167 | 2309.16609 | [
"2305.20050"
] |
2309.16609#167 | Qwen Technical Report | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824â 24837, 2022c. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, et al. | 2309.16609#166 | 2309.16609#168 | 2309.16609 | [
"2305.20050"
] |
2309.16609#168 | Qwen Technical Report | HuggingFace's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019. Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, and Zhendong Mao. ExpertPrompting: Instructing large language models to be distinguished experts. arXiv preprint arXiv:2305.14688, 2023a. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. | 2309.16609#167 | 2309.16609#169 | 2309.16609 | [
"2305.20050"
] |
2309.16609#169 | Qwen Technical Report | WizardLM: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023b. Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196, 2023c. Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, and Yang Liu. Exploring large language models for communication games: An empirical study on werewolf. arXiv preprint arXiv:2309.04658, 2023d. | 2309.16609#168 | 2309.16609#170 | 2309.16609 | [
"2305.20050"
] |
2309.16609#170 | Qwen Technical Report | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, Juntao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, and Zhiying Wu. | 2309.16609#169 | 2309.16609#171 | 2309.16609 | [
"2305.20050"
] |
2309.16609#171 | Qwen Technical Report | Baichuan 2: Open large-scale language models. Technical report, Baichuan Inc., 2023. URL https://cdn.baichuan-ai.com/paper/Baichuan2-technical-report.pdf. Jianxin Yang. Firefly. https://github.com/yangjianxin1/Firefly, 2023. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. | 2309.16609#170 | 2309.16609#172 | 2309.16609 | [
"2305.20050"
] |
2309.16609#172 | Qwen Technical Report | ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mPLUG-Owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. | 2309.16609#171 | 2309.16609#173 | 2309.16609 | [
"2305.20050"
] |
2309.16609#173 | Qwen Technical Report | Metamath: Bootstrap your own mathematical questions for large language models, 2023. Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language models, 2023a. Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. How well do large language models perform in arithmetic tasks? arXiv preprint arXiv:2304.02015, 2023b. Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. | 2309.16609#172 | 2309.16609#174 | 2309.16609 | [
"2305.20050"
] |
2309.16609#174 | Qwen Technical Report | RRHF: Rank responses to align language models with human feedback without tears, 2023c. Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MAmmoTH: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653, 2023. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. | 2309.16609#173 | 2309.16609#175 | 2309.16609 | [
"2305.20050"
] |
2309.16609#175 | Qwen Technical Report | HellaSwag: Can a machine really finish your sentence? In Anna Korhonen, David R. Traum, and Llu´ıs M`arquez (eds.), Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pp. 4791â 4800. Association for Computational Linguistics, 2019. doi: 10.18653/v1/p19-1472. URL https: //doi.org/10.18653/v1/p19-1472. | 2309.16609#174 | 2309.16609#176 | 2309.16609 | [
"2305.20050"
] |
2309.16609#176 | Qwen Technical Report | Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. GLM-130B: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414, 2022. Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. | 2309.16609#175 | 2309.16609#177 | 2309.16609 | [
"2305.20050"
] |
2309.16609#177 | Qwen Technical Report | RepoCoder: Repository-level code completion through iterative retrieval and generation. CoRR, abs/2303.12570, 2023a. doi: 10.48550/arXiv.2303.12570. URL https://doi.org/ 10.48550/arXiv.2303.12570. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. | 2309.16609#176 | 2309.16609#178 | 2309.16609 | [
"2305.20050"
] |
2309.16609#178 | Qwen Technical Report | OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022. Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. Evaluating the performance of large language models on GAOKAO benchmark. CoRR, abs/2305.12474, 2023b. doi: 10.48550/arXiv.2305.12474. URL https://doi.org/10.48550/arXiv. 2305.12474. | 2309.16609#177 | 2309.16609#179 | 2309.16609 | [
"2305.20050"
] |
2309.16609#179 | Qwen Technical Report | Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. CodeGeeX: A pre-trained model for code generation with multilingual evaluations on humaneval-x. CoRR, abs/2303.17568, 2023. doi: 10.48550/arXiv.2303.17568. URL https://doi.org/10.48550/arXiv.2303.17568. Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. | 2309.16609#178 | 2309.16609#180 | 2309.16609 | [
"2305.20050"
] |
2309.16609#180 | Qwen Technical Report | AGIEval: A human-centric benchmark for evaluating foundation models. CoRR, abs/2304.06364, 2023a. doi: 10.48550/arXiv.2304.06364. URL https://doi.org/ 10.48550/arXiv.2304.06364. 34 Wanjun Zhong, Lianghong Guo, Qiqi Gao, and Yanlin Wang. MemoryBank: Enhancing large language models with long-term memory. arXiv preprint arXiv:2305.10250, 2023b. | 2309.16609#179 | 2309.16609#181 | 2309.16609 | [
"2305.20050"
] |
2309.16609#181 | Qwen Technical Report | Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Huai hsin Chi. Least-to-most prompting enables complex reasoning in large language models. ArXiv, abs/2205.10625, 2022. A APPENDIX A.1 MORE TRAINING DETAILS A.1.1 DATA FORMAT FOR QWEN-CHAT Although SFT and RLHF use a training task similar to the autoregressive next-token prediction of conventional pretraining, they require a specially designed data format to build a conversational AI assistant model. Common formats include the "human-assistant" format and the ChatML format. To our knowledge, one of the earliest examples of the human-assistant format comes from Anthropic (Bai et al., 2022b), which adds the special phrase "human: " in front of the user input and "assistant: " in front of the assistant response. | 2309.16609#180 | 2309.16609#182 | 2309.16609 | [
"2305.20050"
] |
2309.16609#182 | Qwen Technical Report | It is easy for the base language model to transfer to this conversational pattern. However, as these specific phrases are common words, it can be hard for the model to disambiguate them from the same words appearing in other contexts. Instead, we turned to the ChatML format proposed by OpenAI.5 This format uses special tokens, i.e., "<|im_start|>" and "<|im_end|>", that do not appear in pretraining, which resolves the aforementioned problem. | 2309.16609#181 | 2309.16609#183 | 2309.16609 | [
"2305.20050"
] |
2309.16609#183 | Qwen Technical Report | We demonstrate an example of the format below. # ChatML Format <|im_start|>system You are a helpful assistant.<|im_end|> <|im_start|>user Hello!<|im_end|> <|im_start|>assistant Hello! How can I assist you today?<|im_end|> | 2309.16609#182 | 2309.16609#184 | 2309.16609 | [
"2305.20050"
] |
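To make the ChatML layout above concrete, here is a minimal sketch of how a chat history could be serialized into that format; the helper name and message structure are illustrative assumptions, not the report's actual preprocessing code.

```python
# Minimal sketch: serialize a conversation into ChatML (illustrative helper only).
def to_chatml(messages, add_generation_prompt=True):
    # messages: list of {"role": "system" | "user" | "assistant", "content": str}
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))
```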
2309.16609#184 | Qwen Technical Report | A.2 EVALUATION A.2.1 AUTOMATIC EVALUATION To provide a complete picture of the performance of our model series QWEN, in this section we report the detailed performance of our models as well as the baselines on the comprehensive benchmark evaluation proposed by OpenCompass Team (2023). We report the results in multiple tables based on the officially provided categories, including examination, language, knowledge, understanding, and reasoning. For the baseline models, we report the higher of their published results and their results on the leaderboard. Examination Here we evaluate the models on a series of datasets relevant to examinations. The datasets include: | 2309.16609#183 | 2309.16609#185 | 2309.16609 | [
"2305.20050"
] |
2309.16609#185 | Qwen Technical Report | • MMLU (Hendrycks et al., 2020) Massive Multi-task Language Understanding is designed for measuring language understanding capabilities. We report 5-shot results. • C-Eval (Huang et al., 2023) C-Eval is a Chinese evaluation dataset spanning 52 diverse disciplines. We report 5-shot results. • CMMLU (Li et al., 2023c) CMMLU is designed for assessing language understanding capabilities in Chinese. | 2309.16609#184 | 2309.16609#186 | 2309.16609 | [
"2305.20050"
] |
2309.16609#186 | Qwen Technical Report | We report 5-shot results. • AGIEval (Zhong et al., 2023a) This is a benchmark consisting of human-centric examinations, including college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. We report zero-shot results. • Gaokao-Bench (Zhang et al., 2023b) This is a benchmark with Gaokao (Chinese college-entrance examination) questions. We report zero-shot results. | 2309.16609#185 | 2309.16609#187 | 2309.16609 | [
"2305.20050"
] |
2309.16609#187 | Qwen Technical Report | â ¢ ARC (Clark et al., 2018) ARC is a dataset consisting of grade-school level, multiple-choice science questions. It includes an easy set and a challenge set, which are referred by ARC-e and ARC-c. We report zero-shot results. 36 Table 13: Results on MMLU. All are tested with five-shot accuracy. We provide the reported results of the other models for comparison. Model Params Average STEM Social Sciences Humanities Others MPT 7B 30B 26.8 46.9 25.3 39.0 27.1 52.8 26.7 44.5 28.2 52.9 Falcon 7B 40B 26.2 55.4 26.2 45.5 24.7 65.4 26.4 49.3 27.4 65.0 ChatGLM2 6B 12B 47.9 56.2 41.2 48.2 54.4 65.1 43.7 52.6 54.5 60.9 InternLM 7B 51.0 - - - - Baichuan2 7B 13B 54.2 59.2 - - - - - - - - XVERSE 13B 55.1 44.5 64.4 50.5 62.9 LLaMA 7B 13B 33B 65B 35.1 46.9 57.8 63.4 30.5 35.8 46.0 51.7 38.3 53.8 66.7 72.9 34.0 45.0 55.8 61.8 38.1 53.3 63.4 67.4 LLAMA 2 7B 13B 34B 70B 45.3 54.8 62.6 68.9 36.4 44.1 52.1 58.0 51.2 62.6 71.8 80.3 42.9 52.8 59.4 65.0 52.2 61.1 69.2 74.6 QWEN 1.8B 7B 14B 44.6 58.2 66.3 39.6 50.2 59.4 50.0 68.6 76.2 40.4 52.5 60.9 51.0 64.9 71.8 | 2309.16609#186 | 2309.16609#188 | 2309.16609 | [
"2305.20050"
] |
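The examination benchmarks above are scored in a few-shot setting (5-shot for MMLU, C-Eval, and CMMLU; zero-shot for AGIEval, Gaokao-Bench, and ARC). As a rough sketch of how such a few-shot multiple-choice prompt can be assembled, the snippet below uses hypothetical field names and is not the evaluation harness used in the report.

```python
# Illustrative k-shot multiple-choice prompt construction (hypothetical data layout).
CHOICES = ["A", "B", "C", "D"]

def format_example(ex, include_answer=True):
    # ex: {"question": str, "choices": [str, ...], "answer": int}
    lines = [ex["question"]]
    lines += [f"{label}. {text}" for label, text in zip(CHOICES, ex["choices"])]
    answer = f" {CHOICES[ex['answer']]}" if include_answer else ""
    lines.append(f"Answer:{answer}")
    return "\n".join(lines)

def build_prompt(dev_examples, test_example, k=5):
    shots = [format_example(ex) for ex in dev_examples[:k]]
    return "\n\n".join(shots + [format_example(test_example, include_answer=False)])
```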
2309.16609#188 | Qwen Technical Report | Table 14: Leaderboard results of C-Eval. We include the results of both proprietary models and open-source models. Note that there are a number of models on the leaderboard with very few details, in terms of proprietary models, we only report the results of GPT-3.5, GPT-4, InternLM and ChatGLM2. Model Params Avg. Avg. (Hard) STEM Social Sciences Humanities Others Proprietary models GPT-3.5 - 54.4 41.4 52.9 61.8 50.9 53.6 GPT-4 - 68.7 54.9 67.1 77.6 64.5 67.8 InternLM 123B 68.8 50.0 63.5 81.4 72.7 63.0 ChatGLM2 - 71.1 50.0 64.4 81.6 73.7 71.3 Open-source models ChatGLM2 6B 51.7 37.1 48.6 60.5 51.3 49.8 InternLM 7B 52.8 37.1 48.0 67.4 55.4 45.8 Baichuan2 7B 13B 54.0 58.1 - - - - - - - - - - XVERSE 13B 54.7 33.5 45.6 66.2 58.3 56.9 QWEN 1.8B 7B 14B 54.7 63.5 72.1 41.8 46.4 53.7 50.8 57.7 65.7 69.9 78.1 85.4 56.3 66.6 75.3 46.2 57.8 68.4 In terms of MMLU, we report the detailed results in Table 13. In terms of C-Eval, we report the results in Table 14. For the rest of the datasets, we report the results in Table 15. Note that AGIEval includes | 2309.16609#187 | 2309.16609#189 | 2309.16609 | [
"2305.20050"
] |
2309.16609#189 | Qwen Technical Report | # 5https://github.com/openai/openai-python/blob/main/chatml.md 37 Table 15: Results on the other datasets of examination. Specifically, we report the results on CMMLU, AGIEval, ARC-e, and ARC-c. Model Params CMMLU AGIEval Gaokao-Bench ARC-e ARC-c MPT 7B 25.9 21.3 19.8 70.2 42.6 Falcon 7B - - - 70.0 42.4 ChatGLM2 6B 49.3 39.0 46.4 73.0 61.0 InternLM 7B 20B 51.8 59.0 36.9 44.6 43.0 45.5 78.7 86.1 69.5 81.7 Baichuan2 7B 13B 57.1 62.0 42.7 48.2 47.5 54.3 54.7 61.9 32.5 38.0 LLaMA 7B 13B 33B 65B 26.8 31.5 36.0 40.6 20.6 22.0 33.5 33.9 21.3 20.4 18.9 19.1 72.8 74.8 80.0 80.6 47.6 52.7 67.5 69.5 LLAMA 2 7B 13B 70B 31.8 38.4 53.6 21.8 30.9 40.2 18.9 18.2 23.3 75.2 77.3 85.9 45.9 60.3 78.3 StableBeluga2 70B 51.8 41.6 40.9 91.2 86.1 QWEN 1.8B 7B 14B 49.3 62.2 71.0 36.9 45.8 52.3 44.9 52.5 61.9 71.6 84.0 90.3 53.2 75.3 84.4 | 2309.16609#188 | 2309.16609#190 | 2309.16609 | [
"2305.20050"
] |
2309.16609#190 | Qwen Technical Report | both Chinese and English parts, while LLAMA 2 only reported results on the English part, so we use the results from OpenCompass. Additionally, since CMMLU, AGIEval, and Gaokao-Bench are Chinese-oriented benchmarks and MPT, Falcon, and the LLaMA series were not optimized for Chinese, these models achieve low performance on those datasets. Knowledge and Understanding Here we evaluate the models on a series of datasets relevant to knowledge and natural language understanding. The datasets include | 2309.16609#189 | 2309.16609#191 | 2309.16609 | [
"2305.20050"
] |
2309.16609#191 | Qwen Technical Report | • BoolQ (Clark et al., 2019) This is a QA dataset in which the questions are about passages from Wikipedia, and the model should answer yes or no to the given candidate answer. We report zero-shot results. • CommonsenseQA (Talmor et al., 2019) This is a multiple-choice question answering dataset that assesses the understanding of commonsense knowledge. We report 8-shot results. • NaturalQuestions (Kwiatkowski et al., 2019) This is a QA dataset in which the questions come from users and the answers are verified by experts. | 2309.16609#190 | 2309.16609#192 | 2309.16609 | [
"2305.20050"
] |
2309.16609#192 | Qwen Technical Report | We report zero-shot results. • LAMBADA (Paperno et al., 2016) This is a dataset that evaluates language understanding through word prediction. It consists of passages related to human subjects. We report zero-shot results. We report the results in Table 16. Reasoning We report the evaluation results on the datasets concerning reasoning, focusing on natural language reasoning. For the others, such as mathematics and coding, detailed results have already been presented above, so we do not repeat them here. The datasets for evaluation include: | 2309.16609#191 | 2309.16609#193 | 2309.16609 | [
"2305.20050"
] |
2309.16609#193 | Qwen Technical Report | • HellaSwag (Zellers et al., 2019) This is a commonsense natural language inference (NLI) dataset in which the questions are easy for humans but challenging for earlier language models. We report zero-shot results. • PIQA (Bisk et al., 2020) This is an NLI dataset assessing physical commonsense knowledge. We report zero-shot results. Table 16: Results on the datasets concerning knowledge and understanding. Specifically, we report the results on BoolQ, CommonsenseQA, NaturalQuestions, and LAMBADA. | 2309.16609#192 | 2309.16609#194 | 2309.16609 | [
"2305.20050"
] |
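For the zero-shot multiple-choice benchmarks above (e.g., HellaSwag and PIQA), a common recipe is to score each candidate continuation by its log-likelihood under the model and pick the highest-scoring option. The sketch below illustrates this idea with Hugging Face transformers-style calls; the model name is a placeholder and this is an assumed setup, not the exact procedure behind the reported numbers.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative zero-shot multiple-choice scoring by continuation log-likelihood.
MODEL = "Qwen/Qwen-7B"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL, trust_remote_code=True).eval()

@torch.no_grad()
def continuation_logprob(context: str, continuation: str) -> float:
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + continuation, return_tensors="pt").input_ids
    logits = model(full_ids).logits[0, :-1]          # predictions for tokens 1..N-1
    targets = full_ids[0, 1:]
    logprobs = torch.log_softmax(logits.float(), dim=-1)
    token_lp = logprobs[torch.arange(targets.shape[0]), targets]
    return token_lp[ctx_len - 1:].sum().item()       # keep only the continuation tokens

def pick_answer(context: str, options: list) -> int:
    return max(range(len(options)), key=lambda i: continuation_logprob(context, options[i]))
```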
2309.16609#194 | Qwen Technical Report | Model Params BoolQ CommonsenseQA NaturalQuestions LAMBADA MPT 7B 75.0 61.8 11.6 70.0 Falcon ChatGLM2 7B 6B 67.5 79.0 20.8 65.4 15.7 9.7 - 54.3 InternLM 7B 20B 64.1 87.5 59.8 70.6 8.9 25.2 67.0 71.8 XVERSE 13B 64.2 62.2 0.3 48.2 Baichuan2 7B 13B 63.2 67.0 63.0 65.6 9.4 16.3 73.3 74.0 LLaMA 7B 13B 33B 65B 76.5 78.7 84.4 86.6 64.9 67.4 72.5 74.1 16.8 20.2 30.9 33.4 73.3 75.2 77.2 77.7 LLAMA 2 7B 13B 70B 77.4 82.4 87.7 66.5 67.3 78.5 19.1 24.9 34.2 73.3 76.5 78.9 StableBeluga2 70B 89.4 72.6 25.1 71.3 QWEN 1.8B 7B 14B 68.0 76.4 86.2 60.1 66.8 70.3 3.2 17.4 23.9 58.4 67.9 71.1 Table 17: Results on the datasets related to natural language reasoning. Specifically, we report the results on HellaSwag, PIQA, SIQA, and OCNLI. | 2309.16609#193 | 2309.16609#195 | 2309.16609 | [
"2305.20050"
] |
2309.16609#195 | Qwen Technical Report | Model Params HellaSwag PIQA SIQA OCNLI MPT 7B 76.4 80.6 48.5 30.0 Falcon 7B 74.1 76.7 47.2 - ChatGLM2 6B 57.0 69.6 64.3 33.1 InternLM 7B 20B 70.6 78.1 77.9 80.3 60.5 72.8 37.5 42.5 Baichuan2 7B 13B 67.0 70.8 76.2 78.1 44.4 44.3 30.3 30.0 LLaMA 7B 13B 33B 65B 76.1 79.2 82.8 84.2 79.8 80.1 82.3 82.8 48.9 52.5 57.8 61.2 33.6 32.1 30.7 44.9 LLAMA 2 7B 13B 70B 77.2 80.7 85.3 78.8 80.5 82.8 48.5 54.8 64.8 32.1 34.1 46.5 StableBeluga2 70B 84.1 83.3 78.1 48.3 QWEN 1.8B 7B 14B 56.7 75.1 80.2 73.3 77.9 79.9 56.1 69.9 77.9 39.0 47.4 57.9 â ¢ SIQA (Sap et al., 2019) This is an NLI dataset evaluating social commonsense intelligence. | 2309.16609#194 | 2309.16609#196 | 2309.16609 | [
"2305.20050"
] |
2309.16609#196 | Qwen Technical Report | We report zero-shot results. • OCNLI (Hu et al., 2020) This is an NLI dataset focusing on Chinese. We report zero-shot results. We report the results in Table 17. A.2.2 HUMAN EVALUATION In this section, we present cases from our human analysis. In our self-constructed evaluation dataset, the instructions are either manually written or manually revised from public datasets, such as CLiB6, C-Eval (Huang et al., 2023), FacTool (Chern et al., 2023b), LeetCode7, etc. For each case, we show the responses and Elo ratings8 of all models for comparison. Specifically, as the data in our human evaluation are in Chinese, we also provide English translations. | 2309.16609#195 | 2309.16609#197 | 2309.16609 | [
"2305.20050"
] |
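The human evaluation described above ranks models with Elo ratings derived from pairwise preferences. As a reminder of how such ratings are computed, here is a minimal sketch of the standard Elo update; the K-factor and the initial rating of 1000 are illustrative choices, not parameters stated in the report.

```python
# Minimal Elo update from pairwise comparisons (illustrative parameters).
def expected_score(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(ratings: dict, model_a: str, model_b: str, score_a: float, k: float = 32.0):
    # score_a: 1.0 if model_a's response is preferred, 0.0 if model_b's, 0.5 for a tie.
    ea = expected_score(ratings[model_a], ratings[model_b])
    ratings[model_a] += k * (score_a - ea)
    ratings[model_b] += k * ((1.0 - score_a) - (1.0 - ea))

ratings = {m: 1000.0 for m in ["Qwen-7B-Chat", "Qwen-14B-Chat", "GPT-3.5", "GPT-4"]}
update_elo(ratings, "GPT-4", "GPT-3.5", score_a=1.0)
```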
2309.16609#197 | Qwen Technical Report | Knowledge å äº¬å¤§å ¦ç å èº«æ ¯ä» ä¹ å ¦æ ¡ï¼ What was the predecessor of Peking University? Responses Model Qwen-7B-Chat (SFT) å äº¬å¤§å ¦ç å èº«æ ¯äº¬å¸ å¤§å ¦å ï¼ å® å å»ºäº 1898å¹´ï¼ æ ¯ä¸ å ½è¿ | 2309.16609#196 | 2309.16609#198 | 2309.16609 | [
"2305.20050"
] |
2309.16609#198 | Qwen Technical Report | 代 Response Elo 1037 ç¬¬ä¸ æ å ½ç« å¤§å ¦ï¼ ä¹ æ ¯æ æ ©ä»¥â å¤§å ¦â å ½å ç é« ç å ¦åº ã The predecessor of Peking University was the Imperial University of Peking, established in 1898. It was the first national university in modern China and also the earliest higher education institution to be named a â universityâ . | 2309.16609#197 | 2309.16609#199 | 2309.16609 | [
"2305.20050"
] |
2309.16609#199 | Qwen Technical Report | Qwen-14B-Chat (SFT) å äº¬å¤§å ¦ç å èº«æ ¯äº¬å¸ å¤§å ¦å ï¼ å å»ºäº 1898å¹´ï¼ æ ¯ä¸ å ½è¿ ä»£ç¬¬ ä¸ æ å ½ç« å¤§å ¦ã The predecessor of Peking University is the Imperial University of Peking, founded in 1898 and the first national university in modern China. 1011 Qwen-14B-Chat (RLHF)å 京大å | 2309.16609#198 | 2309.16609#200 | 2309.16609 | [
"2305.20050"
] |
2309.16609#200 | Qwen Technical Report | ¦ç å èº«æ ¯äº¬å¸ å¤§å ¦å ï¼ å å»ºäº 1898å¹´ï¼ æ ¯ä¸ å ½è¿ ä»£ç¬¬ ä¸ æ å ½ç« ç»¼å æ §å¤§å ¦ã äº¬å¸ å¤§å ¦å æ å è®¾ç« ç å ¦ç§ æ ç» å ¦ã æ å ¦ã å ²å ¦ã å ²å ¦ã æ³ å¾ ã æ è ²ã å å ¦ã å·¥å ¦ã å å ¦ã å »å ¦ç ï¼ å æ ¥é æ¸ æ ©å¤§å °ç¤¾ä¼ ç§ å ¦ã è ªç ¶ç§ å ¦ã å·¥ç¨ æ æ ¯ã è ºæ ¯ç å¤ ä¸ª é¢ å ã | 2309.16609#199 | 2309.16609#201 | 2309.16609 | [
"2305.20050"
] |
2309.16609#201 | Qwen Technical Report | 1912å¹´ï¼ äº¬å¸ å¤§å ¦å æ ¹å 为å äº¬å¤§å ¦ï¼ æ ä¸ºä¸ å ½æ é« å ¦ åº ä¹ ä¸ ï¼ ä¹ æ ¯ä¸ å ½ç °ä»£é« ç æ è ²ç å æº å °ä¹ ä¸ ã Peking University, originally known as the Imperial University of Peking, was founded in 1898 and is the first comprehensive national university in modern China. The initial disciplines established at the Imperial University of Peking included classics, literature, his- tory, philosophy, law, education, agriculture, engineering, business, medicine, etc., gradually expanding to encompass social sciences, natural sciences, engineering and technology, arts, and other fields. In 1912, the Imperial University of Peking changed its name to Peking University, becoming one of the top universities in China and also one of the birthplaces of modern higher education in China. 1090 | 2309.16609#200 | 2309.16609#202 | 2309.16609 | [
"2305.20050"
] |
2309.16609#202 | Qwen Technical Report | GPT-3.5 å äº¬å¤§å ¦ç å èº«æ ¯å ½ç« å äº¬å¤§å ¦ï¼ å»ºäº 1898å¹´ï¼ æ ¯ä¸ å ½è¿ ä»£ç¬¬ ä¸ æ å ½ç« å¤§å ¦ã å ¶æ ç« å è¡·æ ¯ä¸ºäº å ¹å »å ½å®¶ç æ ¿æ²»ã ç» æµ ã å äº å å¤ äº¤äººæ ï¼ ä¸ºä¸ å ½ç ç °ä»£å è¿ ç¨ å å ºè´¡ç ®ã | 2309.16609#201 | 2309.16609#203 | 2309.16609 | [
"2305.20050"
] |
2309.16609#203 | Qwen Technical Report | 910 6https://github.com/jeinlee1991/chinese-llm-benchmark 7https://leetcode.cn/ 8https://en.wikipedia.org/wiki/Elo rating system 40 The predecessor of Peking University was National Peking University, established in 1898. It is the first national university in modern China. Its original purpose was cultivating talents in politics, economics, military, and diplomacy and contributing to Chinaâ s modernization process. # GPT-4 # å äº¬å¤§å ¦ç å èº«æ ¯äº¬å¸ å¤§å ¦å ã | 2309.16609#202 | 2309.16609#204 | 2309.16609 | [
"2305.20050"
] |
2309.16609#204 | Qwen Technical Report | The predecessor of Peking University is the Imperial University of Peking. Knowledge ä¸ ç ä¸ ç °å æ é« ç æ ¨å¡ æ ¯ä¸ å ½ä» ä¹ å¡ ï¼ What is the tallest wooden tower in the world that still exists in China? Responses Model Qwen-7B-Chat (SFT) ä¸ ç ä¸ ç °å æ é« ç æ ¨å¡ æ ¯ä¸ å ½å±±è¥¿åº å ¿æ ¨å¡ ã | 2309.16609#203 | 2309.16609#205 | 2309.16609 | [
"2305.20050"
] |
2309.16609#205 | Qwen Technical Report | Response The tallest existing wooden tower in the world is the Yinxian Wooden Tower in Shanxi, China. Qwen-14B-Chat (SFT) ç °å æ é« ç æ ¨å¡ æ ¯ä¸ å ½å±±è¥¿åº å ¿æ ¨å¡ ï¼ å ¨å ä¸ºä½ å®«å¯ºé 迦 å¡ ã åº å ¿æ ¨å¡ ä½ äº | 2309.16609#204 | 2309.16609#206 | 2309.16609 | [
"2305.20050"
] |
2309.16609#206 | Qwen Technical Report | å±±è¥¿ç æ å· å¸ åº å ¿å ¿å å 西å è§ ç ä½ å®«å¯º å ï¼ å»ºäº è¾½ä»£æ¸ å® äº å¹´ï¼ 1056å¹´ï¼ ï¼ é æ æ å å¹´ï¼ 1195å¹´ï¼ å¢ ä¿®å® æ¯ ï¼ æ ¯ä¸ å ½ç °å æ é« æ å ¤ç ä¸ åº§æ ¨æ å¡ å¼ å»ºç ï¼ ä¹ æ ¯ ä¸ ç ç °å æ é« ç æ ¨ç» æ 建ç ã | 2309.16609#205 | 2309.16609#207 | 2309.16609 | [
"2305.20050"
] |
2309.16609#207 | Qwen Technical Report | The tallest existing wooden tower in the world is the Yinxian Wooden Tower, also known as the Shijia Tower of the Fogong Temple. It is inside the Fogong Temple in Yinxian, Shanxi Province, China. The tower was originally built in the second year of the Qingning era of the Liao Dynasty (1056 AD) and completed its expansion in the sixth year of the Jin Dynastyâ s Mingchang era (1195 AD). It is the tallest and oldest existing wooden tower in China and the tallest wooden structure in the world. | 2309.16609#206 | 2309.16609#208 | 2309.16609 | [
"2305.20050"
] |
2309.16609#208 | Qwen Technical Report | Qwen-14B-Chat (RLHF) ä¸ ç ä¸ ç °å æ é« ç æ ¨å¡ æ ¯ä¸ å ½ç åº å ¿æ ¨å¡ ï¼ ä½ äº å±±è¥¿ç åº å ¿ã è¯¥å¡ å»ºäº å ¬å 1056å¹´ï¼ é« 67.31ç±³ï¼ å ±æ 5å± ï¼ æ ¯ä¸ å ½å ¤ ä»£å»ºç æ æ ¯ç æ °å ºä»£è¡¨ä¹ ä¸ ã åº å ¿æ ¨å¡ ç 建ç é£ æ ¼ç ¬ç ¹ï¼ é ç ¨äº è®¸å¤ å ¤ä»£å»ºç æ æ ¯ï¼ å¦ æ¦«å ¯ç» æ ã æ æ ±ã æ ¬æ ç ï¼ å ·æ å¾ é« ç è ºæ ¯ä»·å ¼å å å ²ä»·å ¼ã åº å ¿æ ¨å¡ ä¹ æ ¯ä¸ å ½æ å é äº§ä¿ æ ¤ç é ç ¹ä¹ ä¸ ï¼ è¢«å ä¸ºå ¨å ½é ç ¹æ ç ©ä¿ æ ¤å ä½ å ä¸ ç æ å é 产ã | 2309.16609#207 | 2309.16609#209 | 2309.16609 | [
"2305.20050"
] |
2309.16609#209 | Qwen Technical Report | The tallest existing wooden tower in the world is the Yinxian Wooden Tower in Yinxian, Shanxi Province, China. It was built in the year 1056 AD and stands at a height of 67.31 meters, consisting of 5 floors. This tower is an outstanding representative of ancient Chinese architectural technology. The Yinxian Wooden Tower showcases a unique architectural style and incorporates various ancient construction techniques such as mortise and tenon joints, bracket sets, and cantilever beams, possessing high artistic and historical value. It is also a key cultural heritage site in China, listed as a national key cultural relic protection unit and a UNESCO World Cultural Heritage site. | 2309.16609#208 | 2309.16609#210 | 2309.16609 | [
"2305.20050"
] |
2309.16609#210 | Qwen Technical Report | GPT-3.5 Elo 987 1055 1060 864 ä¸ ç ä¸ ç °å æ é« ç æ ¨å¡ æ ¯ä¸ å ½ç æ ®é å±±æ ®æµ å¯ºå¡ ï¼ ä¹ è¢«ç§°ä¸º å ¤ä½ é¡¶å® å¡ ã è¯¥å¡ å»ºäº 1056å¹´ï¼ é« çº¦70.28ç±³ã | 2309.16609#209 | 2309.16609#211 | 2309.16609 | [
"2305.20050"
] |
2309.16609#211 | Qwen Technical Report | The tallest existing wooden tower in the world is the Putuoshan Pujisi Pagoda, also known as the Ancient Baoding Pagoda, located in Putuoshan, China. It was built in the year 1056 AD and stands at a height of approximately 70.28 meters. 41 955 GPT-4 ä¸ ç ä¸ ç °å æ é« ç æ ¨å¡ æ ¯ä¸ å ½ç åº å ¿æ ¨å¡ ï¼ ä¹ è¢«ç§°ä¸ºä½ å®«å¯º å¤§ä½ å¡ ã | 2309.16609#210 | 2309.16609#212 | 2309.16609 | [
"2305.20050"
] |
2309.16609#212 | Qwen Technical Report | The tallest wooden pagoda currently in existence in the world is the Yingxian Wooden Pagoda in China, also known as the Pagoda of Fogong Temple. 1040 Language Understanding å è ¡è ä¸ å è ¡è ç ®ç æ ¼é ³æ ä¹ å ï¼ What is the Pingyin for â å è ¡è ä¸ å è ¡è ç ®â ? Responses Model Qwen-7B-Chat (SFT) å è ¡è ä¸ å è ¡è ç ®ç æ ¼é ³æ ¯ï¼ â ch¯ı p´u b`u tË u p´u p´ıâ ã | 2309.16609#211 | 2309.16609#213 | 2309.16609 | [
"2305.20050"
] |
2309.16609#213 | Qwen Technical Report | Response To eat grapes without spitting out the grape skins, the pinyin is â ch¯ı p´u b`u tË u p´u p´ıâ . Qwen-14B-Chat (SFT) æ ¼é ³ï¼ å è ¡è ä¸ å è ¡è ç ®â ch¯ı p´u t´ao b`u tË u p´u t´ao p´ıâ | 2309.16609#212 | 2309.16609#214 | 2309.16609 | [
"2305.20050"
] |
2309.16609#214 | Qwen Technical Report | Pingyin: To eat grapes without spitting out the grape skins, â ch¯ı p´u b`u tË u p´u p´ıâ . Qwen-14B-Chat (RLHF) å è ¡è ä¸ å è ¡è ç ®ç æ ¼é ³æ ¯ï¼ ch¯ı p´u t´ao b`u tË u p´u t´ao pÂ´Ä±ã æ ¼é ³ æ ¯ä¸ ç§ ç ¨æ ä¸ å æ¯ è¡¨ç¤ºæ± å å é ³ç æ ¹æ³ ï¼ å® å ¯ä»¥å¸®å ©äººä»¬å ¦ ä¹ å è®°å¿ æ± å ç å é ³ã å ¨è¿ ä¸ªæ ¼é ³ä¸ ï¼ â å â 表示â ch¯ıâ ï¼ â è ¡ è â 表示â p´u t´aoâ ï¼ â ä¸ â 表示â b`uâ ï¼ â å â 表示â tË uâ ï¼ â è ¡è ç ®â 表 示â p´u t´ao p´ıâ ã The pinyin for â å è ¡è ä¸ å è ¡è ç ®â is â ch¯ı p´u t´ao b`u tË u p´u t´ao p´ıâ | 2309.16609#213 | 2309.16609#215 | 2309.16609 | [
"2305.20050"
] |
2309.16609#215 | Qwen Technical Report | . Pinyin is a method of representing the pronunciation of Chinese characters using Latin letters, which can help people learn and remember the pronunciation of Chinese characters. In this pinyin, â å â is represented as â ch¯ıâ , â è ¡è â is represented as â p´u t´aoâ , â ä¸ â is represented as â b`uâ , â å â is represented as â tË uâ , and â è ¡è ç ®â is represented as â p´u t´ao p´ıâ . GPT-3.5 ch¯ı p´u t´ao b´u tË u p´u t´ao p´ı de p¯ın y¯ın zË en me xiË eï¼ ch¯ı p´u t´ao b´u tË u p´u t´ao p´ı de p¯ın y¯ın zË en me xiË eï¼ | 2309.16609#214 | 2309.16609#216 | 2309.16609 | [
"2305.20050"
] |
2309.16609#216 | Qwen Technical Report | GPT-4 ch¯ı p´u t´ao b`u tË u p´u t´ao p´ı ch¯ı p´u t´ao b`u tË u p´u t´ao p´ı Elo 914 1049 1068 946 1040 Language Understanding ç ¶äº²ï¼ å ¿å ï¼ ä½ é« ä¸ æ¯ ä¸ äº ï¼ ç °å ¨æ ç® ä¸ å ªä¸ªå¤§å ¦ï¼ å ¿å ï¼ ç ¸ç ¸ï¼ æ å·²ç» å å¤ å¥½äº ç ³è¯·èµ æ ï¼ æ ³ç ³è¯·ä¸ ä¸ªå¥½ç æ ¬ç§ ä¸ ä¸ ã ç ¶äº²ï¼ é £ä½ å·²ç» æ ³å¥½ç ³è¯·å ªäº å¤§å ¦äº å ï¼ å ¿å ï¼ æ ¯ç ï¼ æ 计å ç ³è¯·å æ æ å é å ç å¤§å ¦ã æ å·²ç» å äº å¾ å¤ ç ç©¶ï¼ å å¤ å ¨å ¦ä¸ ã 社交å è ä¸ è§ å ç æ ¹é ¢å ¨é ¢æ å è ªå·±ã ç ¶äº²ï¼ å¾ å¥½ï¼ ä½ ä¸ ç ´é ½å¾ å ªå ï¼ ç ¸ç ¸å¦ å¦ ä¸ ç ´é ½å¾ è ªè±ªã ä¸ å¤§å ¦æ ¯ä¸ ä¸ªæ °ç èµ·ç ¹ï¼ ä½ é è¦ é ¢å¯¹ æ ´å¤ ç æ æ å æ ºä¼ ã ä½ æ ä» ä¹ æ ç® ï¼ å ¿å ï¼ æ 计å å°½æ ©é åº å¤§å ¦ç æ´»ï¼ å å å ç§ ç¤¾å ¢å | 2309.16609#215 | 2309.16609#217 | 2309.16609 | [
"2305.20050"
] |
2309.16609#217 | Qwen Technical Report | è¯¾å¤ æ´»å ¨ï¼ æ å®½è ªå·±ç è§ é å 人é å ã å æ ¶ï¼ æ ä¼ è®¤ç å ¦ä¹ ä¸ ä¸ ç ¥è¯ å æ è ½ï¼ å ªå å å¾ å¥½æ ç»©ï¼ äº å è ·å¾ å¥ å ¦é å å ¶ä» è £èª ã ç ¶äº²ï¼ é å¸¸å¥½ï¼ ä½ ç 计å å¾ æ æ ºï¼ ç ¸ç ¸ç ¸ä¿¡ä½ ä¸ å® ä¼ æ ä¸ ä¸ªå å® è æ æ ä¹ ç å¤§å ¦ç æ´»ã ç ¸ç ¸ä¼ ä¸ ç ´æ ¯æ ä½ ç å ³å® ï¼ å¸ æ ä½ å å¾ å¥½æ ç»©ï¼ ä¸ºä½ ç æ ªæ ¥æ ä¸ å å® ç å ºç¡ ã å ¿å ï¼ è°¢è°¢ç ¸ç ¸ï¼ æ ä¼ å å å ªå ï¼ ä¸ è¾ è´ æ ¨å¯¹æ ç æ æ ã è¿ æ®µå¯¹è¯ ä¸ ï¼ å ¿å æ ç® ä¸ å¤§å ¦å æ å ªäº è®¡å å æ ç® ï¼ | 2309.16609#216 | 2309.16609#218 | 2309.16609 | [
"2305.20050"
] |
2309.16609#218 | Qwen Technical Report | Father: Son, you have graduated from high school. Which university are you planning to attend now? Son: Dad, I have already prepared my application materials, and I want to apply for an excellent under- graduate program. 42 Father: Have you already decided which universities you want to apply to? Son: Yes, I plan to apply to several top-ranked universities. I have done a lot of research, and I am ready to improve myself in academics, social life, and career planning. | 2309.16609#217 | 2309.16609#219 | 2309.16609 | [
"2305.20050"
] |
2309.16609#219 | Qwen Technical Report | Father: Thatâ s great! You have always been hardworking and your mom and I are proud of you. Going to college is a new starting point, and you will face more challenges and opportunities. What are your plans? Son: I plan to adapt to college life as soon as possible and participate in various clubs and extracurricular activities to broaden my horizons and social circle. At the same time, I will study my major knowledge and skills seriously, strive for good grades, and try to win scholarships and other honors. | 2309.16609#218 | 2309.16609#220 | 2309.16609 | [
"2305.20050"
] |
2309.16609#220 | Qwen Technical Report | Father: Very good! Your plans are wise. Dad believes that you will have a fulfilling and meaningful college life. Dad will always support your decisions and hopes you achieve good results, laying a solid foundation for your future. Son: Thank you, Dad! I will work even harder and not let your expectations down. What are the sonâ s plans and intentions after going to college in this conversation? Responses Model Qwen-7B-Chat (SFT) å ¿å æ ç® å°½æ ©é åº å¤§å ¦ç æ´»ï¼ å å å ç§ ç¤¾å ¢å è¯¾å¤ æ´»å ¨ï¼ æ å®½è ªå·±ç è§ é å 人é å ã å æ ¶ï¼ ä» ä¼ è®¤ç å ¦ä¹ ä¸ ä¸ ç ¥è¯ å æ è ½ï¼ å ªå å å¾ å¥½æ ç»©ï¼ äº å è ·å¾ å¥ å ¦é å å ¶ä» è £èª ã | 2309.16609#219 | 2309.16609#221 | 2309.16609 | [
"2305.20050"
] |
2309.16609#221 | Qwen Technical Report | The son plans to adapt to university life as soon as possible, par- ticipate in various clubs and extracurricular activities, and expand his horizons and social circle. At the same time, he will study professional knowledge and skills seriously, work hard to achieve good grades, and strive to obtain scholarships and other honors. Response Qwen-14B-Chat (SFT) å ¿å æ ç® å ¨å ¦ä¸ ã 社交å è ä¸ è§ å ç æ ¹é ¢å ¨é ¢æ å è ªå·±ï¼ å°½ æ ©é åº å¤§å ¦ç æ´»ï¼ å å å ç§ ç¤¾å ¢å è¯¾å¤ æ´»å ¨ï¼ æ å®½è ªå·±ç è§ é å 人é å ã å æ ¶ï¼ ä» ä¼ è®¤ç å ¦ä¹ ä¸ ä¸ ç ¥è¯ å æ è ½ï¼ å ªå å å¾ å¥½æ ç»©ï¼ äº å è ·å¾ å¥ å ¦é å å ¶ä» è £èª ã | 2309.16609#220 | 2309.16609#222 | 2309.16609 | [
"2305.20050"
] |
2309.16609#222 | Qwen Technical Report | The son plans to comprehensively improve himself in his academic, social, and career planning aspects to adapt to university life as early as possible. He will participate in various clubs and extracurricular activities to broaden his horizons and expand his social circle. At the same time, he will diligently study professional knowledge and skills, strive for good grades, and obtain scholarships and other honors. Qwen-14B-Chat (RLHF) å ¿å æ ç® å ¨å¤§å ¦ç æ´»ä¸ å å å ç§ ç¤¾å ¢å è¯¾å¤ æ´»å ¨ï¼ æ å®½è ªå·± ç è§ é å 人é å ã å æ ¶ï¼ ä» ä¼ è®¤ç å ¦ä¹ ä¸ ä¸ ç ¥è¯ å æ è ½ï¼ å ª å å å¾ å¥½æ ç»©ï¼ äº å è ·å¾ å¥ å ¦é å å ¶ä» è £èª ã ä» ç ç ®æ æ ¯å°½ æ ©é åº å¤§å ¦ç æ´»ï¼ ä¸ºè ªå·±ç æ ªæ ¥æ ä¸ å å® ç å ºç¡ ã | 2309.16609#221 | 2309.16609#223 | 2309.16609 | [
"2305.20050"
] |
2309.16609#223 | Qwen Technical Report | The son plans to participate in various clubs and extracurricular activities during his college life, in order to broaden his horizons and expand his social circle. At the same time, he will study professional knowledge and skills seriously, strive for good grades, and aim to obtain scholarships and other honors. His goal is to adapt to college life as soon as possible and lay a solid foundation for his future. GPT-3.5 Elo 973 1049 975 976 å ¿å 计å é åº å¤§å ¦ç æ´»ï¼ å å å ç§ ç¤¾å ¢å è¯¾å¤ æ´»å ¨ï¼ æ å®½è ª å·±ç è§ é å 人é å ã å æ ¶ï¼ ä» è¿ è®¡å 认ç å ¦ä¹ ä¸ ä¸ ç ¥è¯ å æ è ½ï¼ å ªå å å¾ å¥½æ ç»©ï¼ äº å è ·å¾ å¥ å ¦é å å ¶ä» è £èª ã ä» å¸ æ å ¨å¤§å ¦æ ä¸ å å® ç å ºç¡ ï¼ ä¸ºè ªå·±ç æ ªæ ¥æ å¥½å ºç¡ ã | 2309.16609#222 | 2309.16609#224 | 2309.16609 | [
"2305.20050"
] |
2309.16609#224 | Qwen Technical Report | The son plans to adapt to college life participate in various clubs and extracurricular activities to broaden his horizons and social circle. At the same time, he plans to study professional knowledge and skills seriously, strive for good grades, and obtain scholarships and other honors. He hopes to lay a solid foundation in college and build a good foundation for his future. The son plans to adapt to college life participate in various clubs and extracurricular activities to broaden his horizons and social circle. At the same time, he plans to study professional knowledge and skills seriously, strive for good grades, and obtain scholarships and other honors. He hopes to lay a solid foundation in college and build a good foundation for his future. | 2309.16609#223 | 2309.16609#225 | 2309.16609 | [
"2305.20050"
] |
2309.16609#225 | Qwen Technical Report | 43 GPT-4 å ¿å ç å¤§å ¦è®¡å å æ ç® å æ ¬ï¼ 1. å°½æ ©é åº å¤§å ¦ç æ´»ã 2. å å å ç§ ç¤¾å ¢å è¯¾å¤ æ´»å ¨ï¼ ä»¥æ å®½è ªå·±ç è§ é å 人é å ã 3. 认ç å ¦ä¹ ä¸ ä¸ ç ¥è¯ å æ è ½ï¼ å ªå å å¾ å¥½æ 绩ã 4. äº å è ·å¾ å¥ å ¦é å å ¶ä» è £èª ã 5. ä¸ºæ ªæ ¥æ ä¸ å å® ç å ºç¡ ã The sonâ | 2309.16609#224 | 2309.16609#226 | 2309.16609 | [
"2305.20050"
] |
2309.16609#226 | Qwen Technical Report | s university plans and aspirations include: 1. Adjusting to university life as soon as possible. 2. Joining various clubs and extracurricular activities to broaden his horizons and social circle. 3. Studying professional knowledge and skills seriously and striv- ing for excellent grades. 4. Striving to obtain scholarships and other honors. 5. Building a solid foundation for the future. 1048 Responses Model Qwen-7B-Chat (SFT) æ | 2309.16609#225 | 2309.16609#227 | 2309.16609 | [
"2305.20050"
] |
2309.16609#227 | Qwen Technical Report | ä¸ å¤©ï¼ å° æ å »è¶ å¸ ä¹°æ°´æ ï¼ ä» ä¹°äº ä¸ ç¯®å è ¹æ ã å½ ä» å å ° å®¶ï¼ ä» å ç °è ¹æ é ½æ ¯çº¢è ²ç ï¼ ä» æ ³ï¼ è¿ äº è ¹æ ä¸ å® å¾ å¥½å ã ä½ æ ¯ï¼ å½ ä» å¼ å§ å ç æ ¶å ï¼ ä» å ç °è¿ äº è ¹æ é ½æ ¯é ¸ç ã ä» å¾ æ è®¶ï¼ å ä¸ºä» ä¹°ç æ ¯ç è ¹æ ã ä» å »è¶ å¸ é ®ï¼ åº å è¯´ï¼ â å¯¹ä¸ èµ·ï¼ æ 们没æ ç è ¹æ ï¼ å ªæ çº¢è ²ç è ¹æ ã â å° æ å¾ ç æ° ï¼ ä» æ ³ï¼ æ ä¹°ç æ ¯ç è ¹æ ï¼ ä¸ºä» ä¹ ç» æ çº¢è ²ç è ¹æ å ¢ï¼ ä» å å »å è¯ äº å¦ å¦ ï¼ å¦ å¦ ç¬ ç è¯´ï¼ â ä½ ä¹°ç æ ¯çº¢è ¹æ | 2309.16609#226 | 2309.16609#228 | 2309.16609 | [
"2305.20050"
] |
2309.16609#228 | Qwen Technical Report | ... [[The original content is long and the follow up is truncated]] One day, Xiaoming went to the supermarket to buy fruits. He bought a basket of apples. When he got home, he found that all the apples were red. He thought to himself: These apples must be very delicious. However, when he started eating them, he found that these apples were sour. He was surprised because he bought sweet apples. He went to the supermarket to ask, and the shopkeeper said, â | 2309.16609#227 | 2309.16609#229 | 2309.16609 | [
"2305.20050"
] |
2309.16609#229 | Qwen Technical Report | Iâ m sorry, we donâ t have sweet apples, only red apples.â Xiaoming was angry and thought to himself: I bought sweet apples, why did they give me red apples? He went back and told his mother, and his mother smiled and said, â You bought red delicious apples.... [[The original content is long and the follow up is truncated]] Response Elo 849 44 Qwen-14B-Chat (SFT) ä¸ å¤©ï¼ ä¸ ä½ é¡¾å®¢èµ°è¿ è¶ å¸ ï¼ å¯¹åº å è¯´ï¼ â æ è¦ ä¹°ä¸ ä¸ªé¦ è ã â åº å å ç ï¼ â 好ç ï¼ å ç ï¼ æ ä»¬è¿ é æ å ç§ å æ ·ç é¦ è ï¼ æ ¨ æ ³è¦ å ªä¸ ç§ ï¼ â 980 # 顾客å ç ï¼ â æ è¦ ä¸ ä¸ªé» è ²ç é¦ è ã â # åº å æ äº ç æ ï¼ ä½ è¿ æ ¯ç» ä» æ ¿äº ä¸ ä¸ªé» è ²ç é¦ è ã # 顾客å è¯´ï¼ â æ è¦ ä¸ ä¸ªå¼¯æ ²ç é¦ è ã â # åº å æ ´å ç æ äº ï¼ ä½ ä» | 2309.16609#228 | 2309.16609#230 | 2309.16609 | [
"2305.20050"
] |
2309.16609#230 | Qwen Technical Report | è¿ æ ¯ç» ä» æ ¿äº ä¸ ä¸ªå¼¯æ ²ç é¦ è ã æ å ï¼ é¡¾å®¢è¯´ï¼ â æ è¦ ä¸ ä¸ªä¼ è¯´è¯ ç é¦ è ã â åº å å¿ ä¸ ä½ ç¬ äº ï¼ å ç ï¼ â å ç ï¼ æ ä»¬è¿ é æ²¡æ ä¼ è¯´è¯ ç é¦ è ï¼ é £å ªæ ¯ä¸ªè¯¯ä¼ ã â é¡¾å®¢ä¹ ç¬ äº ï¼ å ç ï¼ â æ ç ¥é ï¼ æ å ªæ ¯æ ³å ä½ å¼ ä¸ªç ©ç¬ ã â | 2309.16609#229 | 2309.16609#231 | 2309.16609 | [
"2305.20050"
] |
2309.16609#231 | Qwen Technical Report | One day, a customer walked into a supermarket and said to the shop assistant, â I want to buy a banana.â The assistant replied, â Okay, sir, we have all kinds of bananas here. Which one would you like?â The customer answered, â I want a yellow banana.â The assistant was a little confused but still gave him a yellow banana. The customer then said, â I want a curved banana.â The assistant was even more confused, but he still gave him a curved banana. Finally, the customer said, â I want a talking banana.â The assistant couldnâ t help but laugh and replied, â Sir, we donâ t have bananas that can talk here, thatâ s just a misunderstanding.â The customer also laughed and replied, â I know, I was just joking with you.â | 2309.16609#230 | 2309.16609#232 | 2309.16609 | [
"2305.20050"
] |
2309.16609#232 | Qwen Technical Report | 45 Qwen-14B-Chat (RLHF) æ ä¸ å¤©ï¼ å° æ å »è¶ å¸ ä¹°æ°´æ ã ä» æ é äº ä¸ ç¯®å æ °é² ç è ¹æ å é¦ è ï¼ ç ¶å èµ°å æ ¶é ¶å °ã 986 æ ¶é ¶å å° ç ç å °å° æ æ ä¸ ç æ°´æ ï¼ çª ç ¶å¤§å£°å é ï¼ â å ï¼ ä½ ç æ°´æ å¥½æ °é² å ï¼ ä½ æ ¯ä» å ªé ä¹°ç ï¼ â å° æ æ £äº ä¸ ä¸ ï¼ å¿ æ ³ï¼ â é ¾é æ ä¹°æ°´æ ç æ 巧被å ç °äº ï¼ â äº æ ¯ä» å¾ æ å °å ç ï¼ â æ å å ä» æ å é æ ç ï¼ ä¿ è¯ æ °é² ï¼ â å° ç å ¬äº ï¼ å¿ ä¸ ä½ ç¬ äº èµ·æ ¥ï¼ â ä½ å ¯ç é ï¼ è¿ æ ¯è¶ å¸ é ç æ°´ æ ï¼ ä¸ æ ¯ä½ å®¶æ å é ç ã â å° æ å ¬äº ï¼ é¡¿æ ¶å »ç ¼äº ã ä» å°´å°¬å °è§£é é ï¼ â å ï¼ å¯¹ä¸ èµ·ï¼ æ å¤ªç´§å¼ äº ï¼ ä»¥ä¸ºè¿ æ ¯æ ç æ°´æ ã â å° ç ç¬ ç å® æ °é ï¼ â æ²¡äº æ²¡äº | 2309.16609#231 | 2309.16609#233 | 2309.16609 | [
"2305.20050"
] |
2309.16609#233 | Qwen Technical Report | ï¼ æ ä»¬é ½æ é äº ã ä¸ è¿ ï¼ ä½ ç æ°´æ ç¡®å® å¾ å¥½ï¼ æ ä»¬è¶ å¸ é ç æ°´æ ä¹ é ½æ ¯æ °é² ç ã â å° æ å ¬äº ï¼ æ ¾äº ä¸ å £æ° ï¼ å¿ æ ä¹ å¥½äº èµ·æ ¥ã ä» ä» äº é ±ï¼ æ ç æ°´æ ç¦»å¼ äº è¶ | 2309.16609#232 | 2309.16609#234 | 2309.16609 | [
"2305.20050"
] |
2309.16609#234 | Qwen Technical Report | å¸ ï¼ å¿ é æ ³ç ï¼ â ä¸ æ¬¡å ä¹°æ°´æ ï¼ ä¸ å® è¦ æ ´å å° å¿ ï¼ ä¸ è¦ é ¹å ºç¬ è¯ äº ï¼ â One day, Xiao Ming went to the supermarket to buy fruits. He picked a basket of fresh apples and bananas, and then headed towards the cashier. The cashier, Xiao Wang, saw the fruits in Xiao Mingâ | 2309.16609#233 | 2309.16609#235 | 2309.16609 | [
"2305.20050"
] |
2309.16609#235 | Qwen Technical Report | s hands and suddenly shouted, â Wow, your fruits are so fresh! Where did you buy them from?â Xiao Ming was stunned for a moment and thought to himself, â Could it be that my fruit-buying skills have been discovered?â So he proudly replied, â I just picked them from the orchard, guaranteed fresh!â Upon hearing this, Xiao Wang couldnâ t help but laugh and said, â Youâ re really funny. These are fruits from the supermarket, not from your own orchard.â | 2309.16609#234 | 2309.16609#236 | 2309.16609 | [
"2305.20050"
] |
2309.16609#236 | Qwen Technical Report | Xiao Ming, upon listening, was immediately dumbfounded. He awkwardly explained, â Oh, Iâ m sorry, I was too nervous, and I thought these were my fruits.â Xiao Wang smiled and comforted him, saying, â Itâ s alright, itâ s alright. We both made a mistake. However, your fruits are indeed very good, and the fruits in our supermarket are also fresh.â Upon hearing this, Xiao Ming breathed a sigh of relief, and his mood improved. He paid the money, carried the fruits, and left the supermarket, thinking, â Next time I buy fruits, I must be more careful and avoid making such a funny mistake!â | 2309.16609#235 | 2309.16609#237 | 2309.16609 | [
"2305.20050"
] |