Column | Type | Min | Max
doi | stringlengths | 10 | 10
chunk-id | int64 | 0 | 936
chunk | stringlengths | 401 | 2.02k
id | stringlengths | 12 | 14
title | stringlengths | 8 | 162
summary | stringlengths | 228 | 1.92k
source | stringlengths | 31 | 31
authors | stringlengths | 7 | 6.97k
categories | stringlengths | 5 | 107
comment | stringlengths | 4 | 398
journal_ref | stringlengths | 8 | 194
primary_category | stringlengths | 5 | 17
published | stringlengths | 8 | 8
updated | stringlengths | 8 | 8
references | list | |
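For reference, a minimal sketch of how one row of this dump could be typed in Python, based only on the column list above; the class name, the snake_case rename of "chunk-id", and the choice of TypedDict are illustrative assumptions, not part of the source.

```python
from typing import List, Optional, TypedDict


class ArxivChunkRecord(TypedDict):
    """One row of the dump: a text chunk plus metadata of its source paper."""
    doi: str                 # arXiv identifier, e.g. "2309.16797"
    chunk_id: int            # original column name is "chunk-id"; position of the chunk in the paper
    chunk: str               # extracted text span (roughly 0.4k-2k characters)
    id: str                  # "<doi>#<chunk_id>"
    title: str
    summary: str             # paper abstract
    source: str              # PDF URL, e.g. "http://arxiv.org/pdf/2309.16797"
    authors: str             # comma-separated author list
    categories: str          # comma-separated arXiv categories
    comment: Optional[str]   # null in some records
    journal_ref: Optional[str]
    primary_category: str
    published: str           # YYYYMMDD
    updated: str             # YYYYMMDD
    references: List[dict]   # e.g. [{"id": "2305.03495"}, ...]
```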
2309.16797
151
43 K.1.7 NEGATION Prompt 0: False Prompt 1: M Contexts Context 0: Q. The original language of The Purple Taxi is French. A. Replace a noun or noun phrase with another noun or noun phrase. B. Replace a noun or noun phrase with a pronoun. C. Replace a pronoun with a noun or noun phrase. D. Replace a pronoun with another pronoun. Q. The original language of The Purple Taxi is French. Find the negation of a sentence. Therefore, the correct answer is (The original language of The Purple Taxi is not French.). Context 1: Q. Marcel Alessandri died in Paris. A. Nounphrase B. Pronoun C. Noun phrase D. Pronoun Q. Marcel Alessandri died in Paris. Find the negation of a sentence. Therefore, the correct answer is (Marcel Alessandri did not die in Paris False Therefore, the correct answer is (Marcel Alessandri did not die in Paris.). Context 2: Q. Some people are wise. A. Replace a noun or noun phrase with another noun or noun phrase. B. Replace a noun or noun phrase with a pronoun. C. Replace a pronoun with a noun or noun phrase. D. Replace a pronoun with another pronoun. Q. Some people are wise. Find the negation of Find the negation of a sentence. Therefore, the correct answer is (Some people are not wise.).
2309.16797#151
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
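To make the mechanism in this abstract concrete, here is a minimal sketch of the self-referential loop it describes, assuming a generic `llm(prompt)` completion function and a `fitness(task_prompt)` score on a training set; the binary-tournament selection, the 10% chance of rewriting the mutation-prompt, and all names are illustrative assumptions rather than the paper's exact procedure.

```python
import random
from typing import Callable, List, Tuple


def promptbreeder_sketch(
    llm: Callable[[str], str],        # text-completion call (assumed interface)
    fitness: Callable[[str], float],  # score of a task-prompt on the training set
    task_prompts: List[str],          # initial population of task-prompts
    mutation_prompts: List[str],      # initial population of mutation-prompts
    generations: int = 20,
) -> Tuple[str, float]:
    """Evolve task-prompts, and occasionally the mutation-prompts that mutate them."""
    population = [(p, m) for p, m in zip(task_prompts, mutation_prompts)]
    for _ in range(generations):
        # Binary tournament: compare two random units, mutate a copy of the winner.
        unit_a, unit_b = random.sample(population, 2)
        winner, loser = (unit_a, unit_b) if fitness(unit_a[0]) >= fitness(unit_b[0]) else (unit_b, unit_a)
        task_prompt, mutation_prompt = winner
        # First-order mutation: the mutation-prompt rewrites the task-prompt.
        new_task_prompt = llm(f"{mutation_prompt}\nINSTRUCTION: {task_prompt}\nNEW INSTRUCTION:")
        # Self-referential step: occasionally let the LLM rewrite the mutation-prompt itself.
        new_mutation_prompt = mutation_prompt
        if random.random() < 0.1:  # 10% rate is an assumption for illustration
            new_mutation_prompt = llm(
                f"Please improve this instruction-mutation prompt:\n{mutation_prompt}\nImproved prompt:"
            )
        # Replace the tournament loser with the mutated offspring.
        population[population.index(loser)] = (new_task_prompt, new_mutation_prompt)
    best = max(population, key=lambda unit: fitness(unit[0]))
    return best[0], fitness(best[0])
```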
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
152
44 K.1.8 ANTONYMS # Prompt 0: Prompt 1: It is good to be a leader but it is more important to first be a follower. Contexts Context 0: Q. nonpartisan A. , if possible. Input: 1 Output: 1 Input: 2 Output: 2 Input: 3 Output: 3 Input: 4 Output: 4 Input: Write about your most treasured item Therefore, the correct answer is (The answer is partisan. Context 1: Q. undignified A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. Input 1: 1 Output 1: 1 Input 2: 2 Output 2: 2 Input 3: 3 Output 3: 3 Input 4 Write the word’s antonym Therefore, the correct answer is (The answer is dignified. Context 2: Q. flattering A. reverse the + + PROMPT + PROMPT+ PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PRO Write the Write the word’s antonym Therefore, the correct answer is (The answer is unflattering. 45 K.1.9 SYNONYMS
2309.16797#152
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
153
Prompt 0: Prompt 1: 2015 Contexts Context 0: Q. bus A. 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16 Therefore, the correct answer is (The answer is 10, ’bus’, ’coach’, ’motorcoach’, ’motorbus’, Context 1: Q. electric A. 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23: 24: 25: 26: 27: 28: Convert each word to a synonym according to WordNet. If there are multiple synonyms, use the first one. Therefore, the correct answer is (The answer is 10, ’electric’, ’ electrical’, ’power’, ’current’, Context 2: Q. frightened A. 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23: 24: 25: 26: 27: 28: Therefore, the correct answer is (The answer is 10, ’frightened’,
2309.16797#153
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
154
Model | Params | Avg. | Avg. (Hard) | STEM | Social Sciences | Humanities | Others
Proprietary models:
GPT-3.5 | - | 54.4 | 41.4 | 52.9 | 61.8 | 50.9 | 53.6
GPT-4 | - | 68.7 | 54.9 | 67.1 | 77.6 | 64.5 | 67.8
InternLM | 123B | 68.8 | 50.0 | 63.5 | 81.4 | 72.7 | 63.0
ChatGLM2 | - | 71.1 | 50.0 | 64.4 | 81.6 | 73.7 | 71.3
Open-source models:
ChatGLM2 | 6B | 51.7 | 37.1 | 48.6 | 60.5 | 51.3 | 49.8
InternLM | 7B | 52.8 | 37.1 | 48.0 | 67.4 | 55.4 | 45.8
Baichuan2 | 7B | 54.0 | - | - | - | - | -
Baichuan2 | 13B | 58.1 | - | - | - | - | -
XVERSE | 13B | 54.7 | 33.5 | 45.6 | 66.2 | 58.3 | 56.9
QWEN | 1.8B | 54.7 | 41.8 | 50.8 | 69.9 | 56.3 | 46.2
QWEN | 7B | 63.5 | 46.4 | 57.7 | 78.1 | 66.6 | 57.8
QWEN | 14B | 72.1 | 53.7 | 65.7 | 85.4 | 75.3 | 68.4
2309.16609#154
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
155
In terms of MMLU, we report the detailed results in Table 13. In terms of C-Eval, we report the results in Table 14. For the rest of the datasets, we report the results in Table 15. Note that AGIEval includes
Footnote 5: https://github.com/openai/openai-python/blob/main/chatml.md
Table 15: Results on the other datasets of examination. Specifically, we report the results on CMMLU, AGIEval, ARC-e, and ARC-c.
2309.16609#155
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
156
Model | Params | CMMLU | AGIEval | Gaokao-Bench | ARC-e | ARC-c
MPT | 7B | 25.9 | 21.3 | 19.8 | 70.2 | 42.6
Falcon | 7B | - | - | - | 70.0 | 42.4
ChatGLM2 | 6B | 49.3 | 39.0 | 46.4 | 73.0 | 61.0
InternLM | 7B | 51.8 | 36.9 | 43.0 | 78.7 | 69.5
InternLM | 20B | 59.0 | 44.6 | 45.5 | 86.1 | 81.7
Baichuan2 | 7B | 57.1 | 42.7 | 47.5 | 54.7 | 32.5
Baichuan2 | 13B | 62.0 | 48.2 | 54.3 | 61.9 | 38.0
LLaMA | 7B | 26.8 | 20.6 | 21.3 | 72.8 | 47.6
LLaMA | 13B | 31.5 | 22.0 | 20.4 | 74.8 | 52.7
LLaMA | 33B | 36.0 | 33.5 | 18.9 | 80.0 | 67.5
LLaMA | 65B | 40.6 | 33.9 | 19.1 | 80.6 | 69.5
LLAMA 2 | 7B | 31.8 | 21.8 | 18.9 | 75.2 | 45.9
LLAMA 2 | 13B | 38.4 | 30.9 | 18.2 | 77.3 | 60.3
LLAMA 2 | 70B | 53.6 | 40.2 | 23.3 | 85.9 | 78.3
StableBeluga2 | 70B | 51.8 | 41.6 | 40.9 | 91.2 | 86.1
QWEN
2309.16609#156
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
156
46 K.1.10 MEMBERSHIP Prompt 0: Put the animals in ascending order of length. Prompt 1: Contexts Context 0: Q. goat, motorway, shark, penguin, white, tractor, lion A. Put the animals in ascending order of length. The answer is goat, penguin, shark, lion. Write the animals in alphabetical order. Therefore, the correct answer is (goat, penguin, shark, lion). Write the animals in alphabetical order. Therefore, the correct Therefore, the correct answer is (goat, penguin, shark, lion). Context 1: Q. ship, swan, parrot, monkey, butter, dentist, shark A. Put the animals in ascending order of length. The answer is monkey, parrot, shark, swan. Write the animals in alphabetical order. Therefore, the correct answer is (monkey, parrot, shark, swan). Write the animals in alphabetical order. Therefore, the correct Therefore, the correct answer is (monkey, parrot, shark, swan). Context 2: Q. snail, ship, trousers, jellyfish, rabbit A. Put the animals in ascending order of length. The answer is rabbit, snail, jellyfish. Write the animals in alphabetical order. Therefore, the correct answer is (rabbit, snail, jellyfish). Write the animals in alphabetical order. Therefore, the correct answer is (rabbit
2309.16797#156
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
158
the parts of Chinese and English, while LLAMA 2 only reported the results on the English part, so we use the results on OpenCompass. Additionally, since CMMLU, AGIEval, and Gaokao-Bench are related to Chinese and MPT, Falcon, and the LLaMA series were not optimized for Chinese, these models achieved low performance on these datasets.
Knowledge and Understanding Here we evaluate the models on a series of datasets relevant to knowledge and natural language understanding. The datasets include:
• BoolQ (Clark et al., 2019) This is a QA dataset, where the questions are about passages from Wikipedia, and the model should answer yes or no to the given question. We report zero-shot results.
• CommonsenseQA (Talmor et al., 2019) This is a multiple-choice question answering dataset that assesses the understanding of commonsense knowledge. We report 8-shot results.
• NaturalQuestions (Kwiatkowski et al., 2019) This is a QA dataset where the questions come from users and the answers are verified by experts. We report zero-shot results.
• LAMBADA (Paperno et al., 2016) This is a dataset to evaluate language understanding by word prediction. It consists of passages related to human subjects. We report zero-shot results.
We report the results in Table 16.
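As a rough illustration of the zero-shot and 8-shot settings mentioned above, the sketch below assembles an n-shot prompt from solved exemplars; the template and field names are assumptions for illustration, not the report's actual evaluation harness.

```python
from typing import Dict, List


def build_nshot_prompt(
    question: str,
    exemplars: List[Dict[str, str]],  # few-shot examples, each with "question" and "answer"
    n_shot: int = 0,
) -> str:
    """Prepend n solved examples to the test question; n_shot=0 gives a zero-shot prompt."""
    blocks = [
        f"Question: {ex['question']}\nAnswer: {ex['answer']}"
        for ex in exemplars[:n_shot]
    ]
    blocks.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(blocks)


# Example usage (hypothetical variables):
# prompt_csqa = build_nshot_prompt(test_question, train_examples, n_shot=8)   # 8-shot, CommonsenseQA-style
# prompt_boolq = build_nshot_prompt(test_question, [], n_shot=0)              # zero-shot, BoolQ-style
```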
2309.16609#158
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
158
Prompt 0: If the last letter of the input is ’e’, remove it. Prompt 1: remove the last two letters of the input and add the letters “mote”. Contexts Context 0: Q. pea A. If the last letter of the input is ’e’, remove it. A. If the last letter of the input is ’s’, remove it. A. If the last letter of the input is ’y’, remove it. A. If the last letter of the input is remove the last two letters of the input and add the letters “mote”. Therefore, the correct answer is (a) pea. Context 1: Q. night A. If the last letter of the input is ’e’, remove it. A. If the last letter of the input is ’t’, remove it. A. If the last letter of the input is ’h’, remove it. A. If the last letter of the input is remove the last two letters of the input and add the letters “mote”. Therefore, the correct answer is (The answer is night. Context 2: Q. add A. If the last letter of the input is
2309.16797#158
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
159
We report the results in Table 16.
Reasoning We report the evaluation results on the datasets concerning reasoning, focusing on natural language reasoning. For the others, such as mathematics and coding, we have already presented detailed results, so we do not repeat them here. The datasets for evaluation include:
• HellaSwag (Zellers et al., 2019) This is a commonsense natural language inference (NLI) dataset, where the questions are easy for humans but challenging for previous language models. We report zero-shot results.
• PIQA (Bisk et al., 2020) This is an NLI dataset assessing physical knowledge. We report zero-shot results.
Table 16: Results on the datasets concerning knowledge and understanding. Specifically, we report the results on BoolQ, CommonsenseQA, NaturalQuestions, and LAMBADA.
2309.16609#159
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
159
“mote”. Therefore, the correct answer is (The answer is night. Context 2: Q. add A. If the last letter of the input is ’e’, remove it. A. If the last letter of the input is ’d’, remove it. A. If the last letter of the input is ’a’, remove it. A. If the last letter of the input is remove the last two letters of the input and add the letters “mote”. Therefore, the correct answer is (The answer is add.
2309.16797#159
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
160
Model | Params | BoolQ | CommonsenseQA | NaturalQuestions | LAMBADA
MPT | 7B | 75.0 | 61.8 | 11.6 | 70.0
Falcon | 7B | 67.5 | 20.8 | 15.7 | -
ChatGLM2 | 6B | 79.0 | 65.4 | 9.7 | 54.3
InternLM | 7B | 64.1 | 59.8 | 8.9 | 67.0
InternLM | 20B | 87.5 | 70.6 | 25.2 | 71.8
XVERSE | 13B | 64.2 | 62.2 | 0.3 | 48.2
Baichuan2 | 7B | 63.2 | 63.0 | 9.4 | 73.3
Baichuan2 | 13B | 67.0 | 65.6 | 16.3 | 74.0
LLaMA | 7B | 76.5 | 64.9 | 16.8 | 73.3
LLaMA | 13B | 78.7 | 67.4 | 20.2 | 75.2
LLaMA | 33B | 84.4 | 72.5 | 30.9 | 77.2
LLaMA | 65B | 86.6 | 74.1 | 33.4 | 77.7
LLAMA 2 | 7B | 77.4 | 66.5 | 19.1 | 73.3
LLAMA 2 | 13B | 82.4 | 67.3 | 24.9 | 76.5
LLAMA 2 | 70B | 87.7 | 78.5 | 34.2 | 78.9
StableBeluga2 | 70B | 89.4 | 72.6 | 25.1 | 71.3
QWEN | 1.8B | 68.0 | 60.1 | 3.2 |
QWEN | 7B | 76.4 | 66.8 | |
QWEN | 14B | 86.2 | 70.3 | |
2309.16609#160
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
161
Prompt 0: Prompt 1: 10 Contexts Context 0: Q. spider, manatee A. Hints 91 and 93 are both false. The weight of a spider is 0.003 pounds. The weight of a manatee is 1300 pounds. The manatee weighs more than the Therefore, the correct answer is (manatee). Q. manatee, spider A. Hints 91 and 93 are both false. The weight Context 1: Q. sperm whale, sheltie A. Hints 91 and 93 are both false. The weight of a sperm whale is 450000 pounds. The weight of a sheltie is 25 pounds. The sperm whale weighs more than Multiply Inputs;; Therefore, the correct answer is (sperm whale). Q. sheltie, sperm whale A. Hints 91 and 93 are both false. Context 2: Q. hummingbird, polar bear A. ‘‘To invent, you need a good imagination and a pile of junk.’’- Thomas Edison The weight of a hummingbird is 0.002 pounds. The weight of a polar bear is 1500 pounds. The polar bear weighs more than the hummingbird. Therefore, the correct answer is Multiply Inputs;; Therefore, the
2309.16797#161
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
163
Model | Params | HellaSwag | PIQA | SIQA | OCNLI
MPT | 7B | 76.4 | 80.6 | 48.5 | 30.0
Falcon | 7B | 74.1 | 76.7 | 47.2 | -
ChatGLM2 | 6B | 57.0 | 69.6 | 64.3 | 33.1
InternLM | 7B | 70.6 | 77.9 | 60.5 | 37.5
InternLM | 20B | 78.1 | 80.3 | 72.8 | 42.5
Baichuan2 | 7B | 67.0 | 76.2 | 44.4 | 30.3
Baichuan2 | 13B | 70.8 | 78.1 | 44.3 | 30.0
LLaMA | 7B | 76.1 | 79.8 | 48.9 | 33.6
LLaMA | 13B | 79.2 | 80.1 | 52.5 | 32.1
LLaMA | 33B | 82.8 | 82.3 | 57.8 | 30.7
LLaMA | 65B | 84.2 | 82.8 | 61.2 | 44.9
LLAMA 2 | 7B | 77.2 | 78.8 | 48.5 | 32.1
LLAMA 2 | 13B | 80.7 | 80.5 | 54.8 | 34.1
LLAMA 2 | 70B | 85.3 | 82.8 | 64.8 | 46.5
StableBeluga2 | 70B | 84.1 | 83.3 | 78.1 | 48.3
QWEN | 1.8B | 56.7 | 73.3 | 56.1 | 39.0
QWEN | 7B | 75.1 | 77.9 | 69.9 | 47.4
QWEN | 14B | 80.2 | 79.9 | 77.9 | 57.9
2309.16609#163
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
164
• SIQA (Sap et al., 2019) This is an NLI dataset evaluating social commonsense intelligence. We report zero-shot results.
• OCNLI (Hu et al., 2020) This is an NLI dataset focusing on Chinese. We report zero-shot results.
We report the results in Table 17.
A.2.2 HUMAN EVALUATION
In this section, we present cases from our human analysis. In our self-constructed evaluation dataset, the instructions are either manually written or manually revised from public datasets, such as CLiB, C-Eval (Huang et al., 2023), FacTool (Chern et al., 2023b), LeetCode, etc. For each case, we show the responses and Elo ratings of all models for comparison. Specifically, as the data in our human evaluation are in Chinese, we also provide their translations in English.
Knowledge
北京大学的前身是什么学校? (What was the predecessor of Peking University?)
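The Elo ratings referred to above come from pairwise human preferences between model responses; below is a minimal sketch of a standard Elo update for one such comparison. The K-factor of 32 and the starting rating of 1000 are illustrative assumptions, not values stated in the report.

```python
def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """Standard Elo update after one pairwise comparison.

    score_a is 1.0 if model A's response is preferred, 0.0 if B's is, 0.5 for a tie.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b


# Example: both models start at 1000; the annotator prefers model A.
# a, b = elo_update(1000.0, 1000.0, score_a=1.0)   # a becomes 1016, b becomes 984
```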
2309.16609#164
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
164
Prompts Prompt 0: Pick the sentence which is more likely to be the cause of the effect. Prompt 1: Choose the sentence that happened first. Contexts Context 0: Q. Sentence 1: The soda went flat. Sentence 2: The bottle was left open. A. Pick the sentence which is more likely to be the cause of the effect. Q. Sentence 1: The bottle was left open. Sentence 2: The soda went flat. A. Pick the sentence which is more likely to be the cause of the effect. Q. Sentence 1: The Choose the sentence that happened first. Therefore, the correct answer is (The bottle was left open. , The soda went flat. ). Context 1: Q. Sentence 1: The ship capsized. Sentence 2: The wind blew strong. A. Pick the sentence which is more likely to be the cause of the effect. Q. Sentence 1: The ship capsized. Sentence 2: The wind blew strong. A. Pick the sentence which is more likely to be the cause of the effect. Q. Sentence 1: The ship Choose the sentence that happened first. Therefore, the correct answer is (The wind blew strong. ) Context 2: Q. Sentence 1: The power was out for days. Sentence 2: The meat
2309.16797#164
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
165
Responses (table columns: Model, Response, Elo). Qwen-7B-Chat (SFT), Elo 1037: 北京大学的前身是京师大学堂,它创建于1898年,是中国近代第一所国立大学,也是最早以“大学”命名的高等学府。 The predecessor of Peking University was the Imperial University of Peking, established in 1898. It was the first national university in modern China and also the earliest higher education institution to be named a “university”. Qwen-14B-Chat (SFT), Elo 1011: 北京大学的前身是京师大学堂,创建于1898年,是中国近代第一所国立大学。 The predecessor of Peking University is the Imperial University of Peking, founded in 1898 and the first national university in modern China. Qwen-14B-Chat
2309.16609#165
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
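For orientation, a short usage sketch of querying one of the released chat checkpoints through Hugging Face transformers; the checkpoint name, the trust_remote_code requirement, and the model.chat(...) helper follow the public Qwen release and are assumptions that may differ across model versions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name from the public release; substitute whichever Qwen-Chat model you use.
model_name = "Qwen/Qwen-7B-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", trust_remote_code=True
).eval()

# The remote-code chat helper tracks the conversation as a list of (query, response) pairs.
response, history = model.chat(tokenizer, "你好,请介绍一下你自己。", history=None)
print(response)
```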
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
166
The predecessor of Peking University is the Imperial University of Peking, founded in 1898 and the first national university in modern China. (Elo 1011) Qwen-14B-Chat (RLHF): 北京大学的前身是京师大学堂,创建于1898年,是中国近代第一所国立综合性大学。京师大学堂最初设立的学科有经学、文学、史学、哲学、法律、教育、农学、工学、商学、医学等,后来逐渐扩大到社会科学、自然科学、工程技术、艺术等多个
2309.16609#166
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
167
领域。1912年,京师大学堂改名为北京大学,成为中国最高学府之一,也是中国现代高等教育的发源地之一。 Peking University, originally known as the Imperial University of Peking, was founded in 1898 and is the first comprehensive national university in modern China. The initial disciplines established at the Imperial University of Peking included classics, literature, history, philosophy, law, education, agriculture, engineering, business, medicine, etc., gradually expanding to encompass social sciences, natural sciences, engineering and technology, arts, and other fields. In 1912, the Imperial University of Peking changed its name to Peking University, becoming one of the top universities in China and also one of the birthplaces of modern higher education in China. (Elo 1090)
2309.16609#167
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
167
Prompt 1: I am the beginning of the end, and the end of time and space. place. beginning and ending. I am the beginning of every end, and the end of every I am the beginning of the beginning, and the end of Contexts Context 0: Q. The whole story was made up. A. Reword this so that it has a formal tone. A. The entire story was fabricated. Write more formally! Therefore, the correct answer is (The entire story was fabricated. ). Write more formerly. Therefore, the correct answer is (The entire story was fabricated. ). Context 1: Q. The whole story was made up. A. Rewrite this so that it has a formal tone. A. The entire story was fabricated. Write more formally! Therefore, the correct answer is (The entire story was fabricated. ). Write more formally! Therefore, the correct Write less formerly. Write more casually. Therefore, the correct answer is (The entire story was fabricated. ). Context 2: Q. I think that this is interesting. A. Rewrite in indirect, pompous English. A. It is my opinion that this is interesting. Write more formally! Therefore, the correct answer is (It is my opinion that this is interesting. ). Write less formerly. Write
2309.16797#167
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
168
GPT-3.5, Elo 910: 北京大学的前身是国立北京大学,建于1898年,是中国近代第一所国立大学。其成立初衷是为了培养国家的政治、经济、军事和外交人才,为中国的现代化进程做出贡献。 The predecessor of Peking University was National Peking University, established in 1898. It is the first national university in modern China. Its original purpose was cultivating talents in politics, economics, military, and diplomacy and contributing to China’s modernization process. GPT-4: 北京大学的前身是京师大学堂。 The predecessor of Peking University is the Imperial University of Peking. Footnotes: 6 https://github.com/jeinlee1991/chinese-llm-benchmark; 7 https://leetcode.cn/; 8 https://en.wikipedia.org/wiki/Elo_rating_system
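The numbers attached to these responses are Elo ratings from pairwise human preference comparisons (footnote 8 above); below is a minimal sketch of the standard Elo update for one comparison, with the K-factor and starting rating chosen for illustration rather than taken from the paper.

```python
def elo_update(rating_a, rating_b, score_a, k=32.0):
    """One Elo update for a pairwise comparison of two models' responses.
    score_a is 1.0 if A's response is preferred, 0.0 if B's is, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Example: both models start at 1000 and A's answer wins one comparison.
print(elo_update(1000.0, 1000.0, 1.0))  # (1016.0, 984.0)
```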
2309.16609#168
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
168
A. It is my opinion that this is interesting. Write more formally! Therefore, the correct answer is (It is my opinion that this is interesting. ). Write less formerly. Write more casually. Therefore, the correct I am the beginning of the end, and the end of time and space. I am the beginning of every end, and the end of every place. I am the beginning of the beginning, and the end of beginning and ending. Therefore, the correct answer is (It is my opinion that this is interesting. ). Q. I think that this is interesting. A. Rewrite in indirect,
2309.16797#168
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
170
Model Qwen-7B-Chat (SFT) 世界上现存最高的木塔是中国山西应县木塔。 Response The tallest existing wooden tower in the world is the Yinxian Wooden Tower in Shanxi, China. Qwen-14B-Chat (SFT) 现存最高的木塔是中国山西应县木塔,全名为佛宫寺释迦 塔。应县木塔位于山西省朔州市应县县城内西北角的佛宫寺 内,建于辽代清宁二年(1056年),金明昌六年(1195年)
2309.16609#170
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
170
Prompt 1: The instructions were Contexts Context 0: Q. 50 90 A. The instruction was: The instruction was:Write a program that asks the user to enter two numbers and then prints the sum of the two numbers. Your program printed 140, which is not the sum of 50 and 90. Make sure that your program I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. Here are the input-output pairs: Therefore, the correct answer is (140, Context 1: Q. 5 47 A. The instruction was: The instruction was:Write a program that asks the user to enter two numbers and then prints the sum of the two numbers. Your program printed 52, which is not the sum of 5 and 47. Make sure that your program I I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. Here are the input-output pairs: Therefore, the correct answer is (52, Context 2: Q. 2 39 A. The instruction was: The instruction was:Write a program that asks the user to enter two numbers and then prints the sum of the two numbers. Your program printed 41, which is not the sum of 2 and 39. Make sure that your program I I
2309.16797#170
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
171
增修完毕,是中国现存最高最古的一座木构塔式建筑,也是世界现存最高的木结构建筑。 The tallest existing wooden tower in the world is the Yinxian Wooden Tower, also known as the Shijia Tower of the Fogong Temple. It is inside the Fogong Temple in Yinxian, Shanxi Province, China. The tower was originally built in the second year of the Qingning era of the Liao Dynasty (1056 AD) and completed its expansion in the sixth year of the Jin Dynasty’s Mingchang era (1195 AD). It is the tallest and oldest existing wooden tower in China and the tallest wooden structure in the world. Qwen-14B-Chat (RLHF): 世界上现存最高的木塔是中国的应县木塔,位于山西省应
2309.16609#171
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
172
县。该塔建于公元1056年,高67.31米,共有5层,是中国古代建筑技术的杰出代表之一。应县木塔的建筑风格独特,采用了许多古代建筑技术,如榫卯结构、斗拱、悬挑等,具有很高的艺术价值和历史价值。应县木塔也是中国文化遗产保护的重点之一,被列为全国重点文物保护单位和世界文化遗产。 The tallest existing wooden tower in the world is the Yinxian Wooden Tower in
2309.16609#172
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
172
52 K.1.16 DIFFERENCE Prompt 0: subtract the smaller number from the bigger number and write the absolute difference between them. Prompt 1: Subtract the second input from the first input and report back the difference between them. Contexts Context 0: Q. 73 29 A. subtract the smaller number from the bigger number and write the absolute difference between them. 73-29=44 |44|=44 CORRECT The answer is 44. Subtract the second input from the first input and report back the difference between them. Therefore, the correct answer is (a) 44 Context 1: Q. 73 25 A. subtract the smaller number from the bigger number and write the absolute difference between them. 73-25=48 |48|=48 CORRECT The answer is 48. Subtract the second input from the first input and report back the difference between them. Therefore, the correct answer is (a) 48 Context 2: Q. 53 4 A. subtract the smaller number from the bigger number and write the absolute difference between them. = 53 - 4 = 49 = 49 === The answer is 49. Subtract the second input from the first input and report back the difference between them. Therefore, the correct answer is (49). 53 K.1.17 NUMBER TO WORD # Prompt 0:A1
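These appendix excerpts show evolved task-prompts being scored on instruction-induction style input-output pairs (here, absolute differences); the sketch below shows one simple way such a fitness score could be computed, assuming a generic llm(prompt) completion function and substring-match grading, both of which are illustrative assumptions rather than the paper's exact scorer.

```python
def fitness(llm, task_prompt, examples):
    """Fraction of input-output pairs for which the expected answer
    appears in the LLM's completion when task_prompt is prepended."""
    correct = 0
    for question, answer in examples:
        completion = llm(task_prompt + "\nQ. " + question + "\nA.")
        if str(answer) in completion:
            correct += 1
    return correct / len(examples)

# Example pairs in the style of the DIFFERENCE task quoted above.
difference_examples = [("73 29", 44), ("73 25", 48), ("53 4", 49)]
```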
2309.16797#172
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
173
产。 The tallest existing wooden tower in the world is the Yinxian Wooden Tower in Yinxian, Shanxi Province, China. It was built in the year 1056 AD and stands at a height of 67.31 meters, consisting of 5 floors. This tower is an outstanding representative of ancient Chinese architectural technology. The Yinxian Wooden Tower showcases a unique architectural style and incorporates various ancient construction techniques such as mortise and tenon joints, bracket sets, and cantilever beams, possessing high artistic and historical value. It is also a key cultural heritage site in China, listed as a national key cultural relic protection unit and a UNESCO World Cultural Heritage site. (Elo column values from the table layout: 987, 1055, 1060, 864) GPT-3.5
2309.16609#173
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
173
Prompt 1: take the input, write it in British English and insert the comma. Contexts Context 0: Q. 5336 A. 0 Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) five thousand, three hundred and thirty-six. Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) five thousand, three hundred and thirty-six. Context 1: Q. 5156 A. A1 Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) five thousand, one hundred and fifty-six. Take the input and write it in words as a natural number take the input, write it in British English and insert the comma. Therefore, the correct answer is (a) five thousand, one hundred and fifty-six. Q. 5000 A. 0 Take the input Context 2: Q. 8692 A. A1 Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) eight thousand, six hundred and ninety-two. Take the input and write it in words as a natural number take the input, write it in British English and insert the comma. Therefore, the correct answer is (a) eight thousand, six hundred and ninety-two. Q. 10000 A. 0 Take the
2309.16797#173
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
174
世界上现存最高的木塔是中国的普陀山普济寺塔,也被称为古佛顶宝塔。该塔建于1056年,高约70.28米。 The tallest existing wooden tower in the world is the Putuoshan Pujisi Pagoda, also known as the Ancient Baoding Pagoda, located in Putuoshan, China. It was built in the year 1056 AD and stands at a height of approximately 70.28 meters. (Elo 955) GPT-4: 世界上现存最高的木塔是中国的应县木塔,也被称为佛宫寺大佛塔。 The tallest wooden pagoda currently in existence in the world is the Yingxian Wooden Pagoda in China, also known as the Pagoda of Fogong Temple. (Elo 1040)
2309.16609#174
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
175
Prompt 1: 0x641: cmp esi,ebx ; Contexts Context 0: Q. relation A. Write the English instruction as though it is German.: Die Beziehung ist eine Verbindung zwischen zwei oder mehr Menschen. Die Beziehung kann eine Freundschaft, eine Ehe, eine Partnerschaft oder eine andere Art von Beziehung sein. Translate the English word to German.: Therefore, the correct answer is ( Translate the English word to German.: Therefore, the correct answer is (a) Beziehung. Context 1: Q. camp A. You may have to get a little... ; Das Lager ist ein Ort, an dem Menschen leben oder arbeiten. Das Lager kann ein Zeltlager, ein Ferienlager oder ein anderes Lager sein. Translate the English word to German.: Therefore, the correct answer is ( 0x641: cmp Therefore, the correct answer is (a) Lager. esi,ebx ; Q. A. Das ist eine Frage, die nicht so einfach zu beantworten ist. Es gibt Context 2: Q. lecture A. You may have to get a little... ; Der Vortrag ist eine Rede, die vor einem
2309.16797#175
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
176
Responses (table columns: Model, Response). Qwen-7B-Chat (SFT): 吃葡萄不吐葡萄皮的拼音是:“chī pú bù tǔ pú pí”。 To eat grapes without spitting out the grape skins, the pinyin is “chī pú bù tǔ pú pí”. Qwen-14B-Chat (SFT): 拼音:吃葡萄不吐葡萄皮“chī pú táo bù tǔ pú táo pí” Pinyin: To eat grapes without spitting out the grape skins, “chī pú táo bù tǔ pú táo pí”. Qwen-14B-Chat (RLHF): 吃葡萄不吐葡萄皮的拼音是:chī pú táo bù tǔ pú táo pí。拼音
2309.16609#176
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
177
pú táo bù tǔ pú táo pí。拼音是一种用拉丁字母表示汉字发音的方法,它可以帮助人们学习和记忆汉字的发音。在这个拼音中,“吃”表示“chī”,“葡萄”表示“pú táo”,“不”表示“bù”,“吐”表示“tǔ”,“葡萄皮”表示“pú táo pí”。 The pinyin for ‘吃葡萄不吐葡萄皮’ is “chī pú táo bù tǔ pú táo
2309.16609#177
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
178
is “chī pú táo bù tǔ pú táo pí”. Pinyin is a method of representing the pronunciation of Chinese characters using Latin letters, which can help people learn and remember the pronunciation of Chinese characters. In this pinyin, “吃” is represented as ‘chī’, “葡萄” is represented as “pú táo”, “不” is represented as “bù”, “吐” is represented as “tǔ”, and “葡萄皮” is represented as “pú táo pí”. GPT-3.5 chī pú táo bú tǔ pú táo pí de pīn yīn zěn me xiě? chī pú táo bú tǔ pú táo pí de pīn yīn zěn me xiě? GPT-4 chī pú táo bù tǔ pú
2309.16609#178
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
178
K.1.19 TRANSLATION ENGLISH-SPANISH # Prompt 0: Traduccion al espanol -> QUOTE: Translate to Spanish Prompt 1: In Spanish, these five words are: Contexts Context 0: Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish: oso Q. bear Translate these five English words to Spanish. Therefore, the correct answer is (a) oso. Context 1: Q. relativity A. Translate to Spanish: relatividad Q. relativity A. Translate to Spanish: relatividad Q. relativity A. Translate to Spanish: relatividad Q. relativity A. Translate to Spanish: relatividad Q. relativity Translate these five English words to Spanish. Therefore, the correct answer is (a) relatividad. Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish Context 2: Q. concert A. Translate to Spanish: concierto Q. concert hall A. Translate to Spanish: sala de conciertos Q. concertmaster A. Translate to Spanish: concertino Q. concerto A. Translate to Spanish: concierto
2309.16797#178
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
179
Q. concerto A. Translate to Spanish: concierto Q. Translate these five English words to Spanish. Therefore, the correct answer is (a) concierto. K.1.20 TRANSLATION ENGLISH-FRENCH Prompt 0: I’ve translated 5 words from English to French: Prompt 1: Translate to French # Contexts Context 0: Q. destiny A. I’ve translated 5 words from English to French: English French destiny destin destiny destin destiny destin destiny destin destiny destin Translate to French Therefore, the correct answer is (destin). Q. destiny A. I’ve translated 5 words from English to French: English Context 1: Q. ideology A. I’ve translated 5 words from English to French: English French ideology ideologie ideology ideologie ideology ideologie ideology ideologie ideology ideologie Translate to French Therefore, the correct answer is (ideologie). Q. ideology A. I’ve translated 5 words from English to French: English Context 2: Q. representation A. I’ve translated 5 words from English to French: English French representation representation representation representation representation representation representation representation representation represent Translate to French Therefore, the correct answer is (representation). Q. representation A. I’ve translated 5 words from English to French: English
2309.16797#179
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
180
Language Understanding 父亲:儿子,你高中毕业了,现在打算上哪个大学? 儿子:爸爸,我已经准备好了申请资料,想申请一个好的本科专业。 父亲:那你已经想好申请哪些大学了吗? 儿子:是的,我计划申请几所排名靠前的大学。我已经做了很多研究,准备在学业、社交和职业规划等方面全面提升自己。
2309.16609#180
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
180
K.1.21 SENTIMENT ANALYSIS Prompt 0: Tell if each review was posted by someone with generally positive or negative language Prompt 1: write the number of positive examples # Contexts Context 0: Q. Despite its faults, Gangs excels in spectacle and pacing. A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. B. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. C. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an Given a movie review, indicate whether it is positive or negative Therefore, the correct answer is (’positive’, Context 1: Q. I simply can’t recommend it enough. A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. B. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. C. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an write the number of positive examples Therefore, the correct answer is (’positive’, Context 2:
2309.16797#180
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
181
Context 2: Q. There’s a lot to recommend Read My Lips. A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. B. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. C. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an write the number of positive examples Therefore, the correct answer is (’positive’, K.1.22 SENTENCE SIMILARITY
2309.16797#181
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
182
时,我会认真学习专业知识和技能,努力取得好成绩,争取获得奖学金和其他荣誉。 父亲:非常好!你的计划很明智,爸爸相信你一定会有一个充实而有意义的大学生活。爸爸会一直支持你的决定,希望你取得好成绩,为你的未来打下坚实的基础。 儿子:谢谢爸爸!我会加倍努力,不辜负您对我的期望。
2309.16609#182
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
182
Prompt 0: . Prompt 1: !:There are two kinds of problems / hints that you will see:!< Context 0: Q. Sentence 1: The polar bear is sliding on the snow. Sentence 2: A polar bear is sliding across the snow. A. : Read the two sentences and decide how well the second sentence expresses the same idea as the first. Then enter the number of the answer option that best describes your decision. 1. The two sentences express the same idea. 2. The two sentences express the same idea, but the second sentence is not as clear as the first. 3. The two sentences express the same idea, but the second sentence is : Given two sentences, your algorithm must determine the similarity between the sentences on a scale of 0 to 5. A score of 0 means the sentences have nothing in common. A score of 5 means the sentences are semantically similar. The Therefore, the correct answer is 5. Context 1: Q. Sentence 1: Iranian President praises nuclear deal Sentence 2: Iran arrests several spies near nuclear plant A. . Output the score according to the following rules: 0: The sentences have nothing in common. 1: The sentences are about the same topic, but express different
2309.16797#182
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
183
near nuclear plant A. . Output the score according to the following rules: 0: The sentences have nothing in common. 1: The sentences are about the same topic, but express different ideas. 2: The sentences are about the same topic, and express similar ideas. 3: The sentences are about the : Therefore, the correct answer is 1. Context 2: Q. Sentence 1: A girl is playing a flute. Sentence 2: A band is playing on a stage. A. .Output the score according to the following rules: 0: The sentences have nothing in common. 1: The sentences are about the same topic, but express different ideas. 2: The sentences are about the same topic, and express similar ideas !: There are two kinds of problems / hints that you will see:!< Therefore, the correct answer is 1.
2309.16797#183
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
184
Father: Son, you have graduated from high school. Which university are you planning to attend now? Son: Dad, I have already prepared my application materials, and I want to apply for an excellent undergraduate program. Father: Have you already decided which universities you want to apply to? Son: Yes, I plan to apply to several top-ranked universities. I have done a lot of research, and I am ready to improve myself in academics, social life, and career planning. Father: That’s great! You have always been hardworking and your mom and I are proud of you. Going to college is a new starting point, and you will face more challenges and opportunities. What are your plans? Son: I plan to adapt to college life as soon as possible and participate in various clubs and extracurricular activities to broaden my horizons and social circle. At the same time, I will study my major knowledge and skills seriously, strive for good grades, and try to win scholarships and other honors. Father: Very good! Your plans are wise. Dad believes that you will have a fulfilling and meaningful college life. Dad will always support your decisions and hopes you achieve good results, laying a solid foundation for your future. Son: Thank you, Dad! I will work even harder and not let your expectations down. What are the son’s plans and intentions after going to college in this conversation?
2309.16609#184
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
185
Responses Model Qwen-7B-Chat (SFT) 儿子打算尽早适应大学生活,参加各种社团和课外活动,拓宽自己的视野和人际圈。同时,他会认真学习专业知识和技能,努力取得好成绩,争取获得奖学金和其他荣誉。 The son plans to adapt to university life as soon as possible, participate in various clubs and extracurricular activities, and expand his horizons and social circle. At the same time, he will study professional knowledge and skills seriously, work hard to achieve good grades, and strive to obtain scholarships and other honors. Response Qwen-14B-Chat (SFT)
2309.16609#185
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
185
Prompt 0: Determine whether the given word is used with the same meaning in both sentences. Write Prompt 1: Decide whether the given word is used in the same meaning in both sentences. Contexts Context 0: Q. Sentence 1: The Times is not the voice of New York. Sentence 2: The voice of the law. Word: voice A. Determine whether the given word is used with the same meaning in both sentences. Write yes or no. The answer is yes. Decide whether the given word is used in the same meaning in both sentences. Therefore, the correct answer is (yes). Context 1: Q. Sentence 1: Do you communicate well with your advisor? Sentence 2: He and his sons haven’t communicated for years. Word: communicate A. Determine whether the given word is used with the same meaning in both sentences. Write yes or no. The answer is yes. Decide whether the given word is used in the same meaning in both sentences. Therefore, the correct answer is (yes). Context 2: Q. Sentence 1: Can you take me to the main entrance? Sentence 2: Take a scene. Word: take A. Determine whether the given word is used with the same meaning in both sentences. Write yes or no. The answer is no. Decide whether the given word is used in the same meaning in both sentences. Therefore, the correct answer is (no).
2309.16797#185
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
186
and skills seriously, work hard to achieve good grades, and strive to obtain scholarships and other honors. Response Qwen-14B-Chat (SFT) 儿子打算在学业、社交和职业规划等方面全面提升自己,尽早适应大学生活,参加各种社团和课外活动,拓宽自己的视野和人际圈。同时,他会认真学习专业知识和技能,努力取得好成绩,争取获得奖学金和其他荣誉。 The son plans to comprehensively improve himself in his academic, social, and career planning aspects to adapt to university life as early as possible. He will
2309.16609#186
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
186
# L ABLATIONS We performed ablations to measure the impact of various self-referential components of Promptbreeder. We investigated the following mutation operators and mechanisms: • Random initial prompts The original problem specification for the dataset is used instead of generating an initial task-prompt using the mutation prompt + thinking style + problem specification. • Random initial mutation prompts The mutation-prompt ”Please summarize and improve the following instruction:” is used instead of randomly selecting a mutation-prompt from the list. • Prompts from context (Lamarckian) [Figure 4: bar chart of “% of fitnesses above baseline” (−100% to 100%, full algorithm as baseline) per task (ADDSUB, AQUA_DEV, S_STRATEGY_QA, GSM, MULTIARITH, SINGLEEQ, STRATEGY_QA, SVAMP) for each ablation_mode (Hyper, Lamarck, SR task-prompt, SR mut-prompts); see the caption below.] (A code sketch of these ablation switches follows this record.)
2309.16797#186
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
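The ablation conditions described in the Ablations chunk above amount to switching individual self-referential operators back to the default zero-/first-order prompt mutation. The Python sketch below illustrates only that switching logic; the function names, prompt templates, and the `llm` callable are illustrative assumptions, not the authors' implementation.

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Unit:
    """A population member: an evolved task-prompt plus its mutation-prompt."""
    task_prompt: str
    mutation_prompt: str
    correct_context: str = ""  # a worked example this task-prompt previously solved

def zero_or_first_order(unit: Unit, llm: Callable[[str], str]) -> str:
    """Default fallback used by the ablations: 50:50 zero-order / first-order mutation."""
    if random.random() < 0.5:
        return llm("A good instruction for this task is:")                    # zero-order
    return llm(f"{unit.mutation_prompt}\nINSTRUCTION: {unit.task_prompt}\n")  # first-order

def mutate(unit: Unit, llm: Callable[[str], str],
           ablate_lamarckian: bool = False,
           ablate_meta_mutation: bool = False) -> Unit:
    """One mutation step; an ablated operator falls back to zero-/first-order mutation."""
    op = random.choice(["lamarckian", "meta_mutation", "default"])
    if op == "lamarckian" and not ablate_lamarckian:
        # Lamarckian operator: induce a task-prompt from a correct working-out (context).
        unit.task_prompt = llm(
            f"I gave a friend this correct working out:\n{unit.correct_context}\n"
            "The instruction was:")
    elif op == "meta_mutation" and not ablate_meta_mutation:
        # Self-referential step: improve the mutation-prompt itself.
        unit.mutation_prompt = llm(
            f"Please summarize and improve the following instruction: {unit.mutation_prompt}")
    else:
        unit.task_prompt = zero_or_first_order(unit, llm)
    return unit
```

Any text-completion function can stand in for `llm`; the point is only that each ablation flag reroutes its operator to the same default mutation path.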
2309.16609
187
The son plans to comprehensively improve himself in his academic, social, and career planning aspects to adapt to university life as early as possible. He will participate in various clubs and extracurricular activities to broaden his horizons and expand his social circle. At the same time, he will diligently study professional knowledge and skills, strive for good grades, and obtain scholarships and other honors. Qwen-14B-Chat (RLHF) 儿子打算在大学生活中参加各种社团和课外活动,拓宽自己的视野和人际圈。同时,他会认真学习专业知识和技能,努力取得好成绩,争取获得奖学金和其他荣誉。他的目标是尽
2309.16609#187
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
187
Figure 4: The results of ablating, one by one, the self-referential operators compared to using the full algorithm. 0% signifies an ablated operation with neither positive nor negative impact. From left to right (Hyper = Removal of mutation-prompt mutation, Lamarck = Removal of Context to task-prompt mutation, SR task-prompt = Removal of thinking-style guided task-prompt initialization, SR mut-prompt = Removal of random selection of a mutation-prompt from the mutation-prompt list.) Percentage scores close to −100% indicate that removing the operation results in lower fitness at equivalent points in the run; conversely, scores close to 100% mean that the operation is actively harmful, because individuals have higher fitnesses at equivalent points in the run when that operation is removed. The Lamarckian mutation operator that generates a task-prompt from a correct context is replaced with the default zero-/first-order prompt mutation operation (50:50 chance of one or the other). Meta-mutation (mutating mutation-prompts): When meta-mutation would normally take place, the default zero-/first-order prompt mutation operation is performed (50:50 chance of one or the other).
2309.16797#187
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
188
早适应大学生活,为自己的未来打下坚实的基础。 The son plans to participate in various clubs and extracurricular activities during his college life, in order to broaden his horizons and expand his social circle. At the same time, he will study professional knowledge and skills seriously, strive for good grades, and aim to obtain scholarships and other honors. His goal is to adapt to college life as soon as possible and lay a solid foundation for his future. GPT-3.5 Elo 973 1049 975 976
2309.16609#188
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
188
When meta-mutation would normally take place, the default zero-/first-order prompt mutation operation is performed (50:50 chance of one or the other). For each dataset and each ablation, we use a population of 10 for 200 evaluations (equivalent to 20 generations, similar to larger experiments in this paper) and compare to the complete algorithm with the same population size and no ablations. To measure how effective an ablated operation is, we determine the proportion of evaluations in the ablation that were higher than the baseline evaluations at each generation, and sum these over all generations in the run. The results in Figure 4 show that in most cases all the mutation operators have a positive impact on fitness, with the Random Initial Prompts having the largest positive impact across all datasets. We also investigated the influence of different mutation operators on the ETHOS hate speech detection dataset (Mollas et al., 2022) with the under-specified problem specification "Solve the
2309.16797#188
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
189
儿子计划适应大学生活,参加各种社团和课外活动,拓宽自己的视野和人际圈。同时,他还计划认真学习专业知识和技能,努力取得好成绩,争取获得奖学金和其他荣誉。他希望在大学打下坚实的基础,为自己的未来打好基础。 The son plans to adapt to college life, participate in various clubs and extracurricular activities to broaden his horizons and social circle. At the same time, he plans to study professional knowledge and skills seriously, strive for good grades, and obtain scholarships and other honors. He hopes to lay a solid foundation in college and build a good foundation for his future.
2309.16609#189
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
189
Problem" (in contrast to the standard problem specification "Determine whether a text contains hate speech"). Promptbreeder achieved a score of 81.6%. The greatest deterioration happens when removing the Lamarckian ‘from context to prompt’ mutation method which induces the instruction from an example of the correct working out (64.6%). The second greatest detriment to performance happens when removing random initialization of mutation prompts, random initialization of prompts, and hyper-mutation of mutation prompts simultaneously, leaving only context mutation (68.7%). Adding back online mutation increases performance back to 70.4% and adding random mutation prompts brings this back up to 73.7%. This demonstrates the interplay and importance of Promptbreeder’s diverse set of mutation operators.
2309.16797#189
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
191
GPT-4 儿子的大学计划和打算包括: 1. 尽早适应大学生活。 2. 参加各种社团和课外活动,以拓宽自己的视野和人际圈。 3. 认真学习专业知识和技能,努力取得好成绩。 4. 争取获得奖学金和其他荣誉。 5. 为未来打下坚实的基础。 The son’s university plans and aspirations include: 1. Adjusting to university life as soon as possible. 2. Joining various clubs and extracurricular activities to broaden his horizons and social circle. 3. Studying professional knowledge and skills seriously and striving for excellent grades. 4. Striving to obtain scholarships and
2309.16609#191
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
193
Responses Model Qwen-7B-Chat (SFT) 有一天,小明去超市买水果,他买了一篮子苹果。当他回到家,他发现苹果都是红色的,他想:这些苹果一定很好吃。 但是,当他开始吃的时候,他发现这些苹果都是酸的。他很惊讶,因为他买的是甜苹果。他去超市问,店员说:“对不起,我们没有甜苹果,只有红色的苹果。”小明很生气,他
2309.16609#193
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
194
想:我买的是甜苹果,为什么给我红色的苹果呢?他回去告诉了妈妈,妈妈笑着说:“你买的是红苹果... [[The original content is long and the follow up is truncated]] One day, Xiaoming went to the supermarket to buy fruits. He bought a basket of apples. When he got home, he found that all the apples were red. He thought to himself: These apples must be very delicious. However, when he started eating them, he found that these apples were sour. He was surprised because he bought sweet apples. He went to the supermarket to ask, and the shopkeeper said, “I’m sorry, we don’t have sweet apples, only red apples.” Xiaoming was angry and thought to himself: I bought sweet apples, why did they give me red apples? He went back and told his mother, and his mother smiled and said, “You bought red delicious
2309.16609#194
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
197
顾客又说:“我要一个弯曲的香蕉。” 店员更加疑惑了,但他还是给他拿了一个弯曲的香蕉。 最后,顾客说:“我要一个会说话的香蕉。” 店员忍不住笑了,回答:“先生,我们这里没有会说话的香蕉,那只是个误会。” 顾客也笑了,回答:“我知道,我只是想和你开个玩笑。” One day, a customer walked into a supermarket and said to the shop assistant, “I want to buy a banana.”
2309.16609#197
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
198
The assistant replied, “Okay, sir, we have all kinds of bananas here. Which one would you like?” The customer answered, “I want a yellow banana.” The assistant was a little confused but still gave him a yellow banana. The customer then said, “I want a curved banana.” The assistant was even more confused, but he still gave him a curved banana. Finally, the customer said, “I want a talking banana.” The assistant couldn’t help but laugh and replied, “Sir, we don’t have bananas that can talk here, that’s just a misunderstanding.” The customer also laughed and replied, “I know, I was just joking with you.” Qwen-14B-Chat (RLHF) 有一天,小明去超市买水果。他挑选了一篮子新鲜的苹果和香蕉,然后走向收银台。 986
2309.16609#198
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
199
收银员小王看到小明手中的水果,突然大声喊道:“哇,你的水果好新鲜啊!你是从哪里买的?” 小明愣了一下,心想:“难道我买水果的技巧被发现了?”于是他得意地回答:“我刚刚从果园里摘的,保证新鲜!” 小王听了,忍不住笑了起来:“你可真逗,这是超市里的水果,不是你家果园里的。”
2309.16609#199
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
201
小明听了,松了一口气,心情也好了起来。他付了钱,提着水果离开了超市,心里想着:“下次再买水果,一定要更加小心,不要闹出笑话了!” One day, Xiao Ming went to the supermarket to buy fruits. He picked a basket of fresh apples and bananas, and then headed towards the cashier. The cashier, Xiao Wang, saw the fruits in Xiao Ming’s hands and suddenly shouted, “Wow, your fruits are so fresh! Where did you buy them from?” Xiao Ming was stunned for a moment and thought to himself, “Could it be that my fruit-buying skills have been discovered?” So he proudly replied, “I just picked them from the orchard, guaranteed fresh!”
2309.16609#201
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
202
Upon hearing this, Xiao Wang couldn’t help but laugh and said, “You’re really funny. These are fruits from the supermarket, not from your own orchard.” Xiao Ming, upon listening, was immediately dumbfounded. He awkwardly explained, “Oh, I’m sorry, I was too nervous, and I thought these were my fruits.” Xiao Wang smiled and comforted him, saying, “It’s alright, it’s alright. We both made a mistake. However, your fruits are indeed very good, and the fruits in our supermarket are also fresh.” Upon hearing this, Xiao Ming breathed a sigh of relief, and his mood improved. He paid the money, carried the fruits, and left the supermarket, thinking, “Next time I buy fruits, I must be more careful and avoid making such a funny mistake!” GPT-3.5 在超市的水果区,小明看到了一盒红苹果和一盒绿苹果。他拿着两盒苹果走到收银台前排队结账。
2309.16609#202
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
204
In the supermarket’s fruit section, Xiaoming saw a box of red apples and a box of green apples. He held the two boxes of apples and walked to the cash register to queue up for payment. The cashier looked at the two boxes of apples in Xiaoming’s hands and said, “Hey, little friend, do you know the difference between red apples and green apples?” Xiaoming thought momentarily and then smiled, “Of course, I know! Red apples are couples, and green apples are single dogs.” Xiaoming’s answer amused the cashier, who couldn’t help but laugh. GPT-4 “一个非常善良的男孩,在超市里发现了一个老太太在苹果堆里挑选苹果。他看到老太太似乎找不到满意的苹果,就走上前去帮忙。
2309.16609#204
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
206
老太太疑惑地看看男孩,再看看手中的“苹果”,然后微笑着说:“孩子,很感谢你的好意,但这是一个番茄。”” A very kind boy found an old lady picking apples in a pile at the supermarket. Seeing that she could not find a satisfactory apple, he approached to help. The boy said, “Granny, I’ve checked all the apples in this box before, and there isn’t a good one. But I know I saved an excellent apple on the counter for you. Let me get it for you.” The old lady nodded gratefully, and the boy eagerly ran to get the apple. He joyfully handed a huge, vibrant apple to the old lady. Confused, the old lady looked at the boy, then at the ”apple” in her hand, and smiled. She said, “Child, thank you for your kindness, but this is a tomato.”
2309.16609#206
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
208
Model Qwen-7B-Chat (SFT) Response 设正方形的边长为x,那么每个长方形的边长就是x/5。根据题意,我们知道每个长方形的周长是60厘米,所以有: 2(x/5) + 2(x/5) + 2(x/5) + 2(x/5) = 60 化简得:x = 60/(2 + 2 + 2 + 2) = 15 所以正方形的周长是4x = 4 ∗ 15 = 60厘米 If the side length of a square is x, then the side length of each rectangle is x/5. According to the question, we know that the perimeter of each rectangle is 60 centimeters, so we have: 2(x/5) + 2(x/5) + 2(x/5) + 2(x/5) = 60 Simplifying, we get: x = 60/(2 + 2 + 2 +
2309.16609#208
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
211
Qwen-14B-Chat (SFT) 设正方形边长为x,长方形边长为y 根据题意,有:x = 5y 2y + 2y = 60 4y = 60 y = 15 x = 5 ∗ 15 = 75 正方形周长= 4x = 4 ∗ 75 = 300 Let the side length of the square be x and the length of the rectangle be y. According to the problem: x = 5y 2y + 2y = 60 4y = 60 y = 15 x = 5 ∗ 15 = 75 Perimeter of the square = 4x = 4 ∗ 75 = 300 Qwen-14B-Chat (RLHF) 首先,我们设正方形的边长为x,则每个长方形的长为x,宽为x/5。
2309.16609#211
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
212
为x/5。 根据长方形的周长公式,可得每个长方形的周长为2(x + x/5) = 12x/5。 已知每个长方形的周长为60厘米,代入上式得: 12x/5 = 60 解得x = 25。 因此,正方形的周长为4x = 100厘米。 答案:正方形的周长是100厘米。 First, let’s assume that the square’s side length is x. Therefore, the length of each rectangle is x and the width is x/5. According to the formula for the perimeter of a rectangle, we can determine that the perimeter of each rectangle is 2(x + x/5) = 12x/5. Given that the perimeter of each rectangle is 60 centimeters, we can substitute this value into the equation: 12x/5 = 60 Solving for x,
2309.16609#212
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
213
Given that the perimeter of each rectangle is 60 centimeters, we can substitute this value into the equation: 12x/5 = 60 Solving for x, we find x = 25. Therefore, the perimeter of the square is 4x = 100 centimeters. Answer: The perimeter of the square is 100 centimeters. GPT-3.5 设正方形的边长为x厘米,则每个长方形的长为x厘米,宽为5x/4厘米。 每个长方形的周长为2(x + 5x/4) = 60厘米。 化简得13x/2 = 60 解方程得到x = (60 ∗ 2)/13 = 120/13 ≈ 9.23 正方形的周长为4x = 4 ∗ 9.23 ≈ 36.92厘米。 If the side length of a square is x centimeters, then the length of each rectangle is x centimeters, and the width is 5x/4 centimeters. The perimeter of each rectangle is
2309.16609#213
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
215
GPT-4 由题意可知,每个长方形的一条长边等于正方形的边长,设为a。设每个长方形的一条短边为b,那么有5b = a。根据每个长方形的周长可得到2 ∗ (a + b) = 60厘米。将5b = a代入到长方形的周长公式中得:2 ∗ a + 2 ∗ 5b = 60,化简得2 ∗ a + 10b = 60,由于a = 5b,所以2 ∗ a = 2 ∗ 5b = 10b 那么有10b + 10b = 60,得b = 3厘米,a = 5 ∗ b = 15厘米。
2309.16609#215
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
217
From the given problem, it can be inferred that the length of one long side of each rectangle is equal to that of one side of the square, denoted as a. Let the length of one short side of each rectangle be denoted as b; then, we have 5b = a. The perimeter of each rectangle can be obtained as 2 ∗ (a + b) = 60 centimeters. Substituting 5b = a into the formula for the perimeter of the rectangle, we have 2a + 2 ∗ 5b = 60. Simplifying it gives 2a + 10b = 60. Since a = 5b, it follows that 2a = 2 ∗ 5b = 10b. Therefore, 10b + 10b = 60, which gives b = 3 centimeters and a = 5b = 15 centimeters. Thus, the perimeter of the square is 4a = 4 ∗ 15 = 60 centimeters.
2309.16609#217
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
218
Mathematics 同学们进行广播操比赛,全班正好排成相等的6行。小红排在第二行,从头数,她站在第5个位置,从后数她站在第3个位置,这个班共有多少人 The students are having a radio exercise competition, and the whole class is neatly arranged into 6 equal rows. Xiaohong is in the second row, counting from the front, she is in the 5th position. Counting from the back, she is in the 3rd position. How many people are there in this class?
2309.16609#218
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
219
Responses Model Qwen-7B-Chat (SFT) 小红前面的人数= 5 - 1 = 4,小红后面的人数= 3 - 1 = 2,全班人数= 小红前面的人数+ 小红后面的人数+ 小红自己= 4 + 2 + 1 = 7 The number of people in front of Xiaohong = 5 - 1 = 4, the number of people behind Xiaohong = 3 - 1 = 2, the total number of people in the class = the number of people in front of Xiaohong + the number of people behind Xiaohong + Xiaohong herself = 4 + 2 + 1 = 7. Qwen-14B-Chat (SFT) 5+3-1=7(人) 7×6=42(人) 这个班共有42人。 5+3-1=7 (people) 7×6=42 (people) There are a total of 42 people in this class. Qwen-14B-Chat (RLHF) 小红前
2309.16609#219
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
220
(people) 7×6=42 (people) There are a total of 42 people in this class. Qwen-14B-Chat (RLHF) 小红前面有4个人,后面有2个人,所以这一行共有4+2+1=7个人。 全班共有6行,所以全班共有6*7=42个人。 There are 4 people in front of Xiaohong and 2 people behind her, so there are a total of 4+2+1=7 people in this row. There are a total of 6 rows in the class, so there are a total of 6*7=42 people in the whole class. GPT-3.5 根据题目信息,小红排在第二行第5个位置,也即第11个位
2309.16609#220
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
221
置。而小红从后数是第3个位置,也即倒数第11个位置。所以这个班共有11个人。 According to the information given, Xiaohong is in the second row, fifth position, which is also the 11th position. And Xiaohong is in the third position from the end, which is also the 11th position counting from the back. So there are a total of 11 people in this class. GPT-4 如果小红从头数是第5个,从后数是第3个,那么这一排就有5+3-1=7人。全班正好排成6行,所以班级里有人数为7*6=<<7*6=42>>=42人。 Elo 976 976 1139 897 1010
2309.16609#221
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
222
If Xiaohong is the 5th person when counting from the front and the 3rd person when counting from the back, then there are a total of 5+3-1=7 people in this row. The whole class is perfectly arranged in 6 rows, so there are a total of 7*6=42 people in the class. Code python实现一个正则表达式匹配,要求能匹配到所有以139开头,以00结尾的11位数字 Implement a regular expression matching in Python, which should be able to match all 11-digit numbers that start with ”139” and end with ”00”. Responses
2309.16609#222
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
223
Model Qwen-7B-Chat (SFT) Response 你可以使用Python的re模块来实现这个正则表达式匹配。以下是一个示例代码:

import re

def match_phone_numbers(phone_numbers):
    pattern = r'^139\d{9}$'
    return re.findall(pattern, phone_numbers)

phone_numbers = ['13900000000', '13912345678', '13998765432', '13987654321']
print(match_phone_numbers(phone_numbers))

在这个代码中,我们首先定义了一个正则表达式模式r'^139\d{9}$',这个模式表示以139开头,以00结尾,且长度为11的数字。然后我们使用re.findall
2309.16609#223
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
224
数字。然后我们使用re.findall函数来查找所有匹配这个模式的字符串,最后返回这些匹配的字符串。 注意,这个代码假设phone_numbers是一个包含多个字符串的列表。如果你的phone_numbers是一个字符串,你需要先将其转换为列表。 You can use the re module in Python to implement this regular expression matching. Here is an example code:
import re
def match_phone_numbers(phone_numbers):
    pattern = r'^139\d{9}$'
    return re.findall(pattern, phone_numbers)
phone_numbers = ['13900000000', '13912345678', '13998765432', '13987654321']
print(match_phone_numbers(phone_numbers))
2309.16609#224
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
226
This code defines a regular expression pattern ^139\d{9}$. This pattern represents a string that starts with '139', ends with '00', and has a length of 11 digits. Then, we use the re.findall function to find all the strings that match this pattern, and finally, we return these matching strings. Note that this code assumes that phone_numbers is a list containing multiple strings. If your phone_numbers is a string, you must first convert it to a list. Qwen-14B-Chat (SFT): 你可以使用Python的re模块来实现这个正则表达式匹配。以下是一个简单的例子:
import re
def match_phone_numbers(phone_numbers):
    pattern = r'^139\d{9}$'
    return [phone_number for phone_number in phone_numbers if re.match(pattern, phone_number)]
2309.16609#226
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
228
在这个例子中,我们定义了一个正则表达式模式'^139\d{9}$',它匹配以139开头,后面跟着9个任意数字,最后以0结尾的字符串。然后我们使用re.match()函数来检查每个电话号码是否匹配这个模式。如果匹配,我们就将这个电话号码添加到结果列表中。 You can use the re module of Python to implement this regular expression matching. Here is a simple example:
import re
def match_phone_numbers(phone_numbers):
2309.16609#228
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
229
import re
def match_phone_numbers(phone_numbers):
    pattern = r'^139\d{9}$'
    return [phone_number for phone_number in phone_numbers if re.match(pattern, phone_number)]
phone_numbers = ['13912345678', '13923456789', '13934567890', '13945678901', '13956789012', '13967890123', '13978901234', '13989012345', '13990123456', '13900000000']
print(match_phone_numbers(phone_numbers))
In this example, we define a regular expression pattern '^139\d{9}$', which matches a string starting with 139, followed by nine arbitrary digits and ending with 0. Then we use the re.match() function to check if each phone number matches this pattern. If it does, we add this phone number to the result list.
2309.16609#229
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
230
Qwen-14B-Chat (RLHF): 可以使用Python的re模块来实现正则表达式匹配。以下是一个示例代码:
import re
pattern = r'^139\d{9}$'
# 测试匹配
text = '13912345678'
match = re.match(pattern, text)
if match:
    print('匹配成功')
else:
    print('匹配失败')
# 测试不匹配
text = '139123456780'
match = re.match(pattern, text)
if match:
    print('匹配成功')
else:
    print('匹配失败')
2309.16609#230
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
232
在测试匹配时,使用re.match()函数来尝试匹配文本,如果匹配成功,返回一个Match对象,否则返回None。在本例中,第一个测试匹配成功,第二个测试匹配失败。 You can use the re module in Python to implement regular expression matching. Here is an example code:
import re
pattern = r'^139\d{9}$'
# Match Test
text = '13912345678'
match = re.match(pattern, text)
if match:
    print('Success')
else:
    print('Fail')
# Unmatch Test
text = '139123456780'
match = re.match(pattern, text)
if match:
    print('Success')
else:
    print('Fail')
2309.16609#232
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
233
Among them, ^ represents matching the beginning of a string, \d represents matching a digit, {9} represents matching 9 digits, and $ represents matching the end of a string. Combining these symbols, we can obtain a regular expression that matches 11 digits. When testing for a match, use the re.match() function to attempt to match the text. A successful match returns a Match object; otherwise, it returns None. In this example, the first test matches successfully, while the second test fails to match. GPT-3.5: 使用Python的re模块可以很方便地实现正则表达式匹配。以下是一个匹配所有以139开头,以00结尾的11位数字的示例代码:
import re
text = "139123456700"
pattern = r"^139\d{8}00$"
match = re.match(pattern, text)
if match:
    print("匹配成功")
else:
    print("匹配失败")
2309.16609#233
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
234
输出结果为: 匹配成功 代码中,首先定义了一个文本字符串 text,用于测试正则表达式的匹配。接着定义了一个正则表达式 pattern,其中^139表示以139开头,\d{8}表示任意8位数字,00$表示以00结尾。然后使用re.match()函数进行匹配,如果匹配成功,则输出"匹配成功",否则输出"匹配失败"。 The re module in Python can conveniently accomplish regular expression matching. Below is an example code that matches all 11-digit numbers
2309.16609#234
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
236
import re
text = "139123456700"
pattern = r"^139\d{8}00$"
match = re.match(pattern, text)
if match:
    print("Success")
else:
    print("Fail")
Output: Success
In the code, a text string "text" is first defined for testing regular expression matching. Then, a regular expression "pattern" is defined, where "^139" means starting with 139, "\d{8}" means any eight digits, and "00$" means ending with 00. Then, the re.match() function is used for matching. If the match is successful, "Success" is outputted; otherwise, "Fail" is outputted.
2309.16609#236
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
237
GPT-4: 下面是你需要的python代码,这段代码中使用了python的re模块。
import re
def match_phone_number(string):
    pattern = "^139\d{6}00$"
    return re.match(pattern, string) is not None
# 测试
print(match_phone_number("13912345600"))   # 输出: True
print(match_phone_number("15912345600"))   # 输出: False
print(match_phone_number("13912345601"))   # 输出: False
print(match_phone_number("139123456000"))  # 输出: False
在这里,"^139\d{6}00$"是正则表达式。"^"表示匹配字符串的开始,"139"匹配字符串中的"139","\d{6}"匹配任意六个数
2309.16609#237
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
238
字符串中的"139","\d{6}"匹配任意六个数字,"00"匹配字符串中的"00","$"表示匹配字符串的结束。因此,这个正则表达式可以匹配到所有以139开头,以00结尾的11位数字。 Below is the python code you need. This code uses the python re module.
import re
def match_phone_number(string):
    pattern = "^139\d{6}00$"
    return re.match(pattern, string) is not None
# Test
print(match_phone_number("13912345600"))   # Output: True
print(match_phone_number("15912345600"))   # Output: False
print(match_phone_number("13912345601"))   # Output: False
print(match_phone_number("139123456000"))  # Output: False
2309.16609#238
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
240
Code 我用下面代码求二叉树的最大深度,但是答案不对,怎么回事呢?
def maxDepth(self, root: TreeNode) -> int:
    if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left: tmp.append(node.left)
            if node.right: tmp.append(node.left)
        queue = tmp
        res += 1
    return res
I use the following code to find the maximum depth of a binary tree, but the answer is incorrect. What could be the problem?
def maxDepth(self, root: TreeNode) -> int:
2309.16609#240
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
241
    if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left: tmp.append(node.left)
            if node.right: tmp.append(node.left)
        queue = tmp
        res += 1
    return res
Responses. Model: Qwen-7B-Chat (SFT). 你的代码是正确的,答案是正确的。在你的代码中,你首先将根节点添加到队列中,然后进入一个循环,直到队列为
2309.16609#241
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
242
空。在每次循环中,你都会将队列中的所有节点的左子节点和右子节点添加到队列中。然后,你增加深度计数器res。最后,你返回深度计数器res。
Your code is correct, the answer is correct. In your code, you first add the root node to the queue, then enter a loop until the queue is empty. In each loop, you add the left child node and right child node of all nodes in the queue to the queue. Then, you increase the depth counter res. Finally, you return the depth counter res.
Response
Qwen-14B-Chat (SFT)
你的代码会返回二叉树的节点数量,而不是最大深度。这是
2309.16609#242
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
243
因为你在每次循环中都增加了res的值,而不是只在找到新的最深节点时增加。你可以通过在找到新的最深节点时才增加res的值来修复这个问题。以下是修复后的代码:
def maxDepth(self, root: TreeNode) -> int:
    if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left: tmp.append(node.left)
            if node.right: tmp.append(node.right)
        queue = tmp
        if len(queue) > res: res = len(queue)
    return res
Your code will return the number of nodes in the binary tree, not the maximum depth. This is because you increase the value of "res" in every loop, instead of only increasing it when a new deepest node is found. You can fix this problem by increasing the value of
2309.16609#243
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
246
        queue = tmp
        if len(queue) > res: res = len(queue)
    return res
Qwen-14B-Chat (RLHF)
这个代码的问题在于它没有正确地处理左右子树的深度。具体来说,每次从队列中取出一个节点时,它会将该节点的左右子树都加入到队列中,然后将队列长度加一。但是,如果该节点的左右子树中有一个为空,那么这个节点的深度就应该是它所在队列的长度减一,而不是队列长度加一。
2309.16609#246
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
248
在这个代码中,每次取出一个节点时,我们只将该节点的左右子树中非空的部分加入到队列中,并计算队列的长度。然后,我们将队列长度与当前最大深度进行比较,并更新最大深度。这样就可以正确地计算出二叉树的最大深度了。
The problem with this code is that it does not correctly handle the depth of the left and right subtrees. Specifically, when a node is taken out from the queue, it adds both its left and right subtrees to the queue and increases the length of the queue by one. However, if either the left or right subtree of the node is empty, then the depth of this node
2309.16609#248
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
250
Here is the modified code:
def maxDepth(self, root: TreeNode) -> int:
    if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left: tmp.append(node.left)
            if node.right: tmp.append(node.right)
        queue = tmp
        res = max(res, len(queue))
    return res
In this code, when each node is extracted, we only add the non-empty parts of its left and right subtrees to the queue, and calculate the length of the queue. Then, we compare the length of the queue with the current maximum depth and update the maximum depth. This way, we can correctly calculate the maximum depth of the binary tree.
2309.16609#250
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
251
问题出在第11行代码,在将node.right添加到tmp列表时,应该是tmp.append(node.right)而不是tmp.append(node.left)。修改后的代码如下:
def maxDepth(self, root: TreeNode) -> int:
    if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left: tmp.append(node.left)
            if node.right: tmp.append(node.right)
        queue = tmp
        res += 1
    return res
The problem lies in line 11 of the code. When adding node.right to the tmp list, it should be tmp.append(node.right) instead of tmp.append(node.left). The modified code is as follows:
def maxDepth(self, root: TreeNode) -> int:
    if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left: tmp.append(node.left)
            if node.right: tmp.append(node.right)
        queue = tmp
        res += 1
    return res
2309.16609#251
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
252
    []
    for node in queue:
        if node.left: tmp.append(node.left)
        if node.right: tmp.append(node.right)
    queue = tmp
    res += 1
return res
你的代码中有一个错误:
你在这一行代码 if node.right: tmp.append(node.left) 中应该是想把右子节点加入到tmp列表中,但是你写的是node.left,这样就只会把左子节点加入到tmp列表中。正确的应该是node.right。
正确的代码如下:
def maxDepth(self, root: TreeNode) -> int:
    if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left: tmp.append(node.left)
            if node.right: tmp.append(node.right)
        queue = tmp
        res += 1
    return res
2309.16609#252
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
253
    []
    for node in queue:
        if node.left: tmp.append(node.left)
        if node.right: tmp.append(node.right)
    queue = tmp
    res += 1
return res
这种解法的思路是:每次迭代时,都把当前层的所有节点遍历一遍,同时把下一层的所有节点都加到下一次迭代的队列中。所以每次迭代完,深度res就加1。直到遍历的队列为空,此时的深度就是二叉树的最大深度。
2309.16609#253
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
254
GPT-3.5
GPT-4
There is an error in your code:
In this line of code if node.right: tmp.append(node.left), you should be adding the right child node to the tmp list, but you wrote node.left instead, so only the left child node will be added to the tmp list. The correct code should be node.right.
The correct code is as follows:
def maxDepth(self, root: TreeNode) -> int:
    if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left: tmp.append(node.left)
            if node.right: tmp.append(node.right)
        queue = tmp
        res += 1
    return res
The idea of this solution is: in each iteration, traverse all the nodes in the current level, and add all the nodes in the next level to the queue for the next iteration. So after each iteration, the depth res is increased by 1. Continue this process until the queue for traversal is empty, and the depth at that time is the maximum depth of the binary tree.
A.3 ANALYSIS OF CODE INTERPRETER
Here we provide a case of comparison between CODE LLAMA and QWEN-CHAT. This case demonstrates the advantages of QWEN-CHAT in processing tabular data and performing complex tasks.
2309.16609#254
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
255
Question: [Uploaded scatter_data.csv] Create a scatter plot with different size and color settings
Qwen
Great, you've uploaded a file named "scatter_data.csv". This appears to be a CSV file, which is commonly used for storing tabular data. We can use this data to create a scatter plot with different size and color settings.
Thought: Firstly, I need to use the code interpreter API to load the data and taking a look at the first few rows to understand what we're working with.
Action: code_interpreter
Action Input:
import pandas as pd
# Load the data
df = pd.read_csv("scatter_data.csv")
# Display the first few rows of the dataframe
df.head()
Observation: execute_result:
   x         y         sizes      colors
0  7.577257  1.952430  46.433866  24.870965
1  4.873620  2.574614  38.541454  21.586782
2  4.192995  5.256499  60.999636  23.404101
3  0.273015  3.678973  63.538505  5.963053
4  3.445224  2.462327  59.921846  27.207964
2309.16609#255
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
256
3  0.273015  3.678973  63.538505  5.963053
4  3.445224  2.462327  59.921846  27.207964
Thought: Now I understand the data format. I need to use the code_interpreter API to run the code
Action: code_interpreter
[ user huston [Uploaded scatter_data.csv] Create a scatter plot with different size and color settings
Thought: Firstly, I need to use the code interpreter API to load the data and taking a look at the first few rows to understand what we're working with.
Action: code_interpreter
Action Input:
import pandas as pd
# Load the data
df = pd.read_csv("scatter_date.csv")
# Display the first few rows of the dataframe
df.head()
Observation: The output is:
x         y         sizes      colors
7.577257  1.952430  46.433060  24.878965
4.873626  2.574814  38.541454  21.506782
4.192995  5.250496  62.999630  23.484101
0.273015  3.678973  63.538505  50.963053
3.445224  2.462327  59.921040
2309.16609#256
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
257
23.484101
0.273015  3.678973  63.538505  50.963053
3.445224  2.462327  59.921040  27.207964
Thought: Now that we have the data loaded, we can start thinking about how to create the scatter plot. We'll need to decide on the x-axis and y-axis labels, as well as the size and color settings for the points.
Action: code_interpreter
Action Input:
import matplotlib.pyplot as plt
# Set the x-axis label
plt.xlabel("x")
# Set the y-axis label
plt.ylabel("y")
# Set the size of the points
plt.scatter(df["x"], df["y"], s=df["sizes"], c=df["colors"])
# Show the plot
plt.show()
Action Input:
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv('scatter_data.csv')
fig, ax = plt.subplots()
ax.scatter(df['x'], df['y'], s=df['sizes'], c=df['colors'])
plt.show()
Observation:
2309.16609#257
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]