Column            Type           Min .. Max
doi               stringlengths  10 .. 10
chunk-id          int64          0 .. 936
chunk             stringlengths  401 .. 2.02k
id                stringlengths  12 .. 14
title             stringlengths  8 .. 162
summary           stringlengths  228 .. 1.92k
source            stringlengths  31 .. 31
authors           stringlengths  7 .. 6.97k
categories        stringlengths  5 .. 107
comment           stringlengths  4 .. 398
journal_ref       stringlengths  8 .. 194
primary_category  stringlengths  5 .. 17
published         stringlengths  8 .. 8
updated           stringlengths  8 .. 8
references        list
2309.16609
37
We adopted the accuracy on the test dataset as an important but not exclusive evaluation metric for the reward model. In Table 4, we report the test pairwise accuracy of PMP and reward models on diverse human preference benchmark datasets (Bai et al., 2022b; Stiennon et al., 2020; Ethayarajh et al., 2022; Lightman et al., 2023). Specifically, QWEN Helpful-base and QWEN Helpful-online are our proprietary datasets. The responses in QWEN Helpful-base are generated from QWEN without RLHF, whereas QWEN Helpful-online includes responses from QWEN with RLHF. The results show that the PMP model demonstrates high generalization capabilities on out-of-distribution data, and the reward model shows significant improvement on our QWEN reward datasets.
3.2.2 REINFORCEMENT LEARNING
Our Proximal Policy Optimization (PPO) process involves four models: the policy model, value model, reference model, and reward model. Before starting the PPO procedure, we pause the policy model’s updates and focus solely on updating the value model for 50 steps. This approach ensures that the value model can adapt to different reward models effectively.
2309.16609#37
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
37
Working Out to Task-Prompt: This is a ‘Lamarckian’ mutation operator similar to instruction induction in APE. We give an LLM a previously generated working out that led to a correct answer via the following prompt: "I gave a friend an instruction and some advice. Here are the correct examples of his workings out + <<correct working out>> + The instruction was:". This is effectively reverse-engineering the task-prompt from a given working out. An effective example of this is shown in Appendix H. This kind of operator is critical when the problem description is absent, insufficient, or misleading.
3.2.5 PROMPT CROSSOVER AND CONTEXT SHUFFLING
Our last class of mutation operators comprises crossover operators and operators for shuffling the few-shot context examples present in the units of evolution.
Prompt Crossover: After a mutation operator is applied, with a 10% chance the task-prompt is replaced with a randomly chosen task-prompt from another member of the population. This member is chosen according to fitness proportionate selection. Crossover is applied only to task-prompts, not to mutation-prompts.
2309.16797#37
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
38
During the PPO operation, we use a strategy of sampling two responses for each query simultaneously. This strategy has proven to be more effective based on our internal benchmarking evaluations. We set the KL divergence coefficient to 0.04 and normalize the reward based on the running mean. The policy and value models have learning rates of 1 × 10^-6 and 5 × 10^-6, respectively. To enhance training stability, we utilize value loss clipping with a clip value of 0.15. For inference, the policy top-p is set to 0.9. Our findings indicate that although the entropy is slightly lower than when top-p is set to 1.0, there is a faster increase in reward, ultimately resulting in consistently higher evaluation rewards under similar conditions. Additionally, we have implemented a pretrained gradient to mitigate the alignment tax. Empirical findings indicate that, with this specific reward model, the KL penalty is adequately robust to counteract the alignment tax in benchmarks that are not strictly code or math in nature, such as those that test common sense knowledge and reading comprehension. It is imperative to utilize a significantly larger volume of the pretrained data in comparison to the PPO data to ensure the effectiveness of the pretrained gradient. Moreover, our empirical study suggests that an overly large value for this coefficient can considerably impede the alignment to the reward model, eventually compromising the ultimate alignment, while an overly small value would only have a marginal effect on alignment tax reduction.
2309.16609#38
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
38
Context Shuffling: Promptbreeder can simultaneously evolve the task-prompts, mutation-prompts, and the set of correct workings out known as the few-shot context. To achieve the latter, we fill up a few-shot context with only workings out that led to correct answers. During evaluation we provide this few-shot context before the task-prompt, providing guidance as to the form of the working out that is desired. If the few-shot context list is full, a single randomly sampled new correct working out replaces an existing working out from the list after fitness evaluation of a unit on a new set of questions. In addition, with a 10% chance we resample the whole context list with probability inverse to the maximum context list length.
# 4 EXPERIMENTS
2309.16797#38
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
39
3.3 AUTOMATIC AND HUMAN EVALUATION OF ALIGNED MODELS
To showcase the effectiveness of our aligned models, we conduct a comparison with other aligned models on well-established benchmarks, including MMLU (Hendrycks et al., 2020), C-Eval (Huang et al., 2023), GSM8K (Cobbe et al., 2021), HumanEval (Chen et al., 2021), and BBH (Suzgun et al., 2022). Besides the widely used few-shot setting, we test our aligned models in the zero-shot setting to demonstrate how well the models follow instructions. The prompt in a zero-shot setting consists of an instruction and a question without any previous examples in the context. The results of the baselines are collected from their official reports and OpenCompass (OpenCompass Team, 2023). The results in Table 5 demonstrate the effectiveness of our aligned models in understanding human instructions and generating appropriate responses. QWEN-14B-Chat outperforms all other models
Table 5: Performance of aligned models on widely-used benchmarks. We report both zero-shot and few-shot performance of the models.
2309.16609#39
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
39
# 4 EXPERIMENTS
We used a population size of 50 units, evolved typically for 20-30 generations, where a generation involves forming random pairs of all individuals in the population and competing them against each other. To evaluate Promptbreeder, we use the datasets from state-of-the-art prompt strategies such as Plan-and-Solve, spanning arithmetic reasoning with GSM8K (Cobbe et al., 2021), SVAMP (Patel et al., 2021), MultiArith (Roy & Roth, 2016), AddSub (Hosseini et al., 2014), AQuA-RAT (Ling et al., 2017), and SingleEq (Koncel-Kedziorski et al., 2015), commonsense reasoning with CommonsenseQA (CSQA, Talmor et al., 2019) and StrategyQA (SQA, Geva et al., 2021), instruction induction tasks from (Honovich et al., 2023), and hate speech classification on the ETHOS dataset (Mollas et al., 2022). See Appendix I for details.
# 5 RESULTS AND DISCUSSION
2309.16797#39
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
40
Model            Params   MMLU           C-Eval         GSM8K          HumanEval   BBH
                          (0 / 5-shot)   (0 / 5-shot)   (0 / 8-shot)   (0-shot)    (0 / 3-shot)
Proprietary models
GPT-3.5          -        - / 69.1       - / 52.5       - / 78.2       73.2        - / 70.1
GPT-4            -        - / 83.0       - / 69.9       - / 91.4       86.6        - / 86.7
Open-source models
ChatGLM2         6B       45.5 / 46.0    50.1 / 52.6    - / 28.8       11.0        - / 32.7
InternLM-Chat    7B       - / 51.1       - / 53.6       - / 33.0       14.6        - / 32.5
Baichuan2-Chat   7B       - / 52.9       - / 55.6       - / 32.8       13.4        - / 35.8
Baichuan2-Chat   13B      - / 57.3       - / 56.7       - / 55.3       17.7        - / 49.9
LLAMA 2-CHAT     7B       - / 46.2       - / 31.9       - / 26.3       12.2
LLAMA 2-CHAT     13B      - / 54.6       - / 36.2       - / 37.1       18.9
LLAMA 2-CHAT     70B      - / 63.8       - / 44.3       - / 59.3       32.3
2309.16609#40
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
41
by using PaLM 2-L (Anil et al., 2023) as the underlying LLM (PS+PaLM 2-L) on all datasets except ADDSUB compared to text-davinci-003 results in the original paper. On all other datasets, zero-shot PB accuracy is higher than PS+, with further improvement in the few-shot case when examples of discovered solutions are included with the prompts. In Table 6 in Appendix J, we show the best evolved zero-shot prompts. The best few-shot candidates are shown in Appendix J.5 onwards. Appendix K shows few-shot results and their controls on the Instruction Induction tasks from the APE paper. To investigate the ability of Promptbreeder to evolve complex domain-specific prompts for a downstream task, we applied it to the ETHOS Hate Speech Classification problem (Mollas et al., 2022). Promptbreeder was able to evolve a prompt strategy consisting of two sequentially applied relatively long prompts (see Appendix J.1) that scored 89% on ETHOS, an improvement over the hand-designed prompt "Determine whether a text contains hate speech", which scores only 80%. This demonstrates that Promptbreeder is capable of intricate domain-adaptation to a task at hand. Appendix B shows a typical evolutionary run and the prompts evolved, showing that unlike iterative APE, fitness continues to increase throughout the run.
2309.16797#41
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
42
except ChatGPT (OpenAI, 2022) and LLAMA 2-CHAT-70B (Touvron et al., 2023b) on all datasets, including MMLU (Hendrycks et al., 2020), C-Eval (Huang et al., 2023), GSM8K (Cobbe et al., 2021), HumanEval (Chen et al., 2021), and BBH (Suzgun et al., 2022). In particular, QWEN’s performance on HumanEval, which measures the quality of generated code, is significantly higher than that of other open-source models. Moreover, QWEN’s performance is consistently better than that of open-source models of similar size, such as LLaMA2 (Touvron et al., 2023b), ChatGLM2 (ChatGLM2 Team, 2023), InternLM (InternLM Team, 2023), and Baichuan2 (Yang et al., 2023). This suggests that our alignment approach, which involves fine-tuning the model on a large dataset of human conversations, has been effective in improving the model’s ability to understand and generate human-like language.
2309.16609#42
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
42
We analysed the best mutation-prompts used during a run for GSM8K. Table 7 in Appendix J.3 shows the best evolved mutation-prompts according to their scores (the proportion of times that, when the mutation-prompt was applied to a task-prompt in a unit, a better task-prompt was produced). Table 8 in Appendix J.4 shows, in descending order, the percentage of times that the different kinds of mutation operators resulted in an improvement when applied to a task-prompt in the population. It demonstrates that all mutation operators are important for Promptbreeder to work, including hypermutation operators, which lead to self-referential self-improvement. We measured the impact of self-referential operators on all the maths datasets and the ETHOS dataset. Details of the ablation process and its results can be found in Appendix L. Removing any self-referential operator is harmful under nearly all circumstances, the greatest benefit being the initial re-description of task-prompts upon initialization. We only found one mutation operator to be harmful for one specific task: drawing randomly from the set of mutation-prompts upon initialization hurts performance on GSM8K.
# 6 CONCLUSION AND FUTURE WORK
2309.16797#42
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
43
Despite this, we have reservations about the ability of traditional benchmark evaluation to accurately measure the performance and potential of chat models trained with alignment techniques in today’s landscape. The results mentioned earlier provide some evidence of our competitive standing, but we believe that it is crucial to develop new evaluation methods specifically tailored to aligned models. We believe that human evaluation is crucial, which is why we have created a carefully curated dataset for this purpose. Our process involved collecting 300 instructions in Chinese that covered a wide range of topics, including knowledge, language understanding, creative writing, coding, and mathematics. To evaluate the performance of different models, we chose the SFT version of QWEN-CHAT-7B and the SFT and RLHF versions of QWEN-CHAT-14B, and added two strong baselines, GPT-3.5 and GPT-4 [4], for comparison. For each instruction, we asked three annotators to rank the model responses by an overall score of helpfulness, informativeness, validity, and other relevant factors. Our dataset and evaluation methodology provide a comprehensive and rigorous assessment of the capabilities of different language models in various domains.
2309.16609#43
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
43
# 6 CONCLUSION AND FUTURE WORK
We introduced PROMPTBREEDER (PB), a self-referential self-improving system that can automatically evolve effective domain-specific prompts for a domain at hand. PB is self-referential in that it not only evolves task-prompts, but it also evolves mutation-prompts that govern the way PB modifies task-prompts. Thus, it is not only improving prompts but it also improves the way it is improving prompts. Going forward, it could be interesting to use the LLM itself to assess and promote the diversity of generated prompts (see Zhang et al., 2023a), or to use it to determine the fitness of a whole “thought process”, e.g. an N-prompt strategy where prompts are conditionally applied rather than unconditionally applied as in Promptbreeder. For example, a more complex “thought process” is to use PB in self-play mode to evolve pre-prompts for LLM-based policies that compete with each other, i.e., in a competitive Socratic [5] dialog.
2309.16797#43
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
44
Figure 4 illustrates the win rates of the various models. For each model, we report the percentage of wins, ties, and losses against GPT-3.5, with the segments of each bar from bottom to top representing these statistics. The experimental results clearly demonstrate that the RLHF model outperforms the SFT models by significant margins, indicating that RLHF can encourage the model to generate responses that are more preferred by humans. In terms of overall performance, we find that the RLHF model significantly outperforms the SFT models while still falling behind GPT-4. This indicates the effectiveness of RLHF for aligning to human preference. To provide a more comprehensive understanding of the models’ performance, we include a case study with examples from different models in Appendix A.2.2. Nonetheless, it remains difficult to accurately capture the gap between our
[4] To obtain the results from the models, we use the OpenAI APIs of GPT-3.5-turbo-0613 and GPT-4-0613.
[Figure 4 legend: Win rate (vs. GPT-3.5) for Qwen-7B-Chat (SFT), Qwen-14B-Chat (SFT), and Qwen-14B-Chat (RLHF)]
2309.16609#44
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
44
PB remains limited compared to the open-endedness of human thought processes. First, the topology of prompting remains fixed (see Figure 2): we only adapt the prompt content, not the prompting algorithm itself. One interpretation of thought is that it is a reconfigurable open-ended self-prompting process. If so, how does one develop complex thought strategies? Clearly it is necessary to generate and evaluate them, and whilst a simple evolutionary process provides one framework in which a thought strategy could be evolved, our actual human experience suggests multiple overlapping hierarchical selective processes at play. Moreover, in addition to language, human thought involves intonation, imagery, etc., in a multimodal system. We believe PB points to an exciting future where increasingly open-ended self-referential self-improvement systems can directly use language as the substrate for improvement instead of relying on any parameter updates. This is intriguing, as this approach will likely continue to scale with ever larger and more capable LLMs in the future.
[5] https://princeton-nlp.github.io/SocraticAI/
# ACKNOWLEDGMENTS
2309.16797#44
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
45
[Figure 4: bar chart of win/tie/loss rates against GPT-3.5, with panels for Average, Knowledge, Language Understanding, Creative Writing, Math, and Coding]
Figure 4: Results of the human evaluation for chat models. We compare Qwen-7B (SFT), Qwen-14B (SFT), Qwen-14B (RLHF), as well as GPT-4 against GPT-3.5. Each bar segment represents the percentage of wins, ties, and losses, from bottom to top. On average, the RLHF model outperforms the SFT model. The dataset consists of 300 Chinese instructions.
models and the proprietary models. As such, a more extensive and rigorous assessment is required for the chat models.
3.4 TOOL USE, CODE INTERPRETER, AND AGENT
Table 6: Performance of QWEN on the in-house Chinese benchmark that evaluates its ability to use unseen tools via ReAct prompting.
2309.16609#45
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
45
5https://princeton-nlp.github.io/SocraticAI/ ACKNOWLEDGMENTS We thank Edward Hughes and Tom Schaul for feedback on an early draft of the paper. We also thank Tom Schaul, Chengrun Yang, and Denny Zhou for fruitful discussions, as well as Gavin Buttimore, Simon Green, Keith Anderson, Joss Moore, Ollie Purkiss, John Quan, and Francesco Visin for their support in running some of the experiments. REFERENCES
2309.16797#45
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
46
Table 6: Performance of QWEN on the in-house Chinese benchmark that evaluates its ability to use unseen tools via ReAct prompting.

Model       Params   Tool Selection (Acc.↑)   Tool Input (Rouge-L↑)   False Positive (Error%↓)
GPT-4       -        95                       90                      15.0
GPT-3.5     -        85                       88                      75.0
QWEN-CHAT   1.8B     92                       89                      19.3
QWEN-CHAT   7B       98                       91                      7.3
QWEN-CHAT   14B      98                       93                      2.4

The QWEN models, which are designed to be versatile, have the remarkable ability to assist with (semi-)automating daily tasks by leveraging their skills in tool-use and planning. As such, they can serve as agents or copilots to help streamline various tasks. We explore QWEN’s proficiency in the following areas (a ReAct prompt sketch follows the list):
• Utilizing unseen tools through ReAct prompting (Yao et al., 2022) (see Table 6).
• Using a Python code interpreter to enhance math reasoning, data analysis, and more (see Table 7 and Table 8).
• Functioning as an agent that accesses Hugging Face’s extensive collection of multimodal models while engaging with humans (see Table 9).
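As a rough illustration of the ReAct format (Yao et al., 2022) referenced in the first bullet, a minimal prompt builder is sketched below; the tool schema and the wording of the template are illustrative assumptions, not QWEN's exact template.

```python
def build_react_prompt(question, tools):
    """tools: list of dicts with hypothetical keys 'name', 'description', 'parameters'."""
    tool_lines = "\n".join(
        f"{t['name']}: {t['description']} Parameters: {t['parameters']}" for t in tools
    )
    names = ", ".join(t["name"] for t in tools)
    return (
        "Answer the question, calling the tools below when helpful.\n\n"
        f"{tool_lines}\n\n"
        "Use the following format:\n"
        "Question: the input question\n"
        "Thought: reason about what to do next\n"
        f"Action: one of [{names}]\n"
        "Action Input: JSON arguments for the action\n"
        "Observation: the action's result\n"
        "... (Thought/Action/Action Input/Observation can repeat) ...\n"
        "Final Answer: the answer to the original question\n\n"
        f"Question: {question}\nThought:"
    )
```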
2309.16609#46
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
46
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia,
2309.16797#46
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
47
Table 7: The proportion of code generated by QWEN that is executable on the in-house evaluation benchmark for Code Interpreter. This benchmark examines QWEN’s coding proficiency in math problem solving, data visualization, and general purposes. CODE LLAMA underperforms on visualization tasks because it hallucinates non-existent columns solely based on CSV file names (see Figure 5).
2309.16609#47
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
47
Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy,
2309.16797#47
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
48
Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. PaLM 2 Technical Report, September 2023.
2309.16797#48
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
49
Table 8: Correctness of the final response on the in-house evaluation benchmark for Code Interpreter. Visualization-Hard tasks involve planning multiple steps, while Visualization-Easy tasks do not. Visualization-All measures both types of tasks. CODE LLAMA excels in performing Visualization-Easy tasks but tends to underperform in Visualization-Hard tasks, due to its inclination to hallucinate non-existent columns based on the name of a CSV file (see Figure 5).

Model                 Params    Math↑   Visualization-Hard↑   Visualization-Easy↑   Visualization-All↑
GPT-4                 -         82.8    66.7                  60.8                  63.8
GPT-3.5               -         47.3    33.3                  55.7                  44.2
LLAMA 2-CHAT          7B        3.9     14.3                  39.2                  26.4
LLAMA 2-CHAT          13B       8.3     8.3                   40.5                  23.9
CODE LLAMA-INSTRUCT   7B        14.3    26.2                  60.8                  42.9
CODE LLAMA-INSTRUCT   13B       28.2    27.4                  62.0                  44.2
InternLM-Chat         7B v1.1   28.5    4.8                   40.5                  22.1
InternLM-Chat         20B       34.6    21.4                  45.6                  33.1
QWEN-CHAT             1.8B      14.7    3.6                   20.3                  11.7
QWEN-CHAT             7B        41.9    40.5                  54.4                  47.2
QWEN-CHAT             14B       58.4    53.6                  59.5                  56.4
2309.16609#49
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
50
Table 9: Results of QWEN-CHAT on the Hugging Face Agent benchmark.

Mode        Model            Params   Tool Selection↑   Tool Used↑   Code↑
Run Mode    GPT-4            -        100               100          97.4
Run Mode    GPT-3.5          -        95.4              96.3         87.0
Run Mode    Starcoder-Base   15B      86.1              87.0         68.9
Run Mode    Starcoder        15B      87.0              88.0         68.9
Run Mode    QWEN-CHAT        1.8B     85.2              84.3         61.1
Run Mode    QWEN-CHAT        7B       87.0              87.0         71.5
Run Mode    QWEN-CHAT        14B      93.5              94.4         87.0
Chat Mode   GPT-4            -        97.9              97.9         98.5
Chat Mode   GPT-3.5          -        97.3              96.8         89.6
Chat Mode   Starcoder-Base   15B      97.9              97.9         91.1
Chat Mode   Starcoder        15B      97.9              97.9         89.6
Chat Mode   QWEN-CHAT        1.8B     93.6              93.6         73.2
Chat Mode   QWEN-CHAT        7B       94.7              94.7         85.1
Chat Mode   QWEN-CHAT        14B      97.9              97.9         95.5
2309.16609#50
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
51
To enhance QWEN’s capabilities as an agent or copilot, we employ the self-instruct (Wang et al., 2023c) strategy for SFT. Specifically, we utilize the in-context learning capability of QWEN for self-instruction. By providing a few examples, we can prompt QWEN to generate more relevant queries and corresponding outputs that follow a specific format, such as ReAct (Yao et al., 2022). We then apply rules and involve human annotators to filter out any noisy samples. Afterwards, the samples are incorporated into QWEN’s training data, resulting in an updated version of QWEN that is more dependable for self-instruction. We iterate through this process multiple times until we gather an ample number of samples that possess both exceptional quality and a wide range of diversity. As a result, our final collection consists of around 2,000 high-quality samples. During the finetuning process, we mix these high-quality samples with all the other general-purpose SFT samples, rather than introducing an additional training stage. By doing so, we are able to retain essential general-purpose capabilities that are also pertinent for constructing agent applications.
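A minimal sketch of one round of this self-instruct loop is given below; `llm`, `is_well_formed` (the rule-based filter), and `human_review` are hypothetical helpers standing in for the components described above.

```python
def self_instruct_round(llm, seed_examples, n_candidates, is_well_formed, human_review):
    # Few-shot exemplars in the target format (e.g., ReAct) drive in-context generation.
    demos = "\n\n".join(seed_examples)
    accepted = []
    for _ in range(n_candidates):
        sample = llm(demos + "\n\nProduce one more query and its answer in the same format:\n")
        # Rule-based filtering first, then human annotation for the survivors.
        if is_well_formed(sample) and human_review(sample):
            accepted.append(sample)
    # Accepted samples are mixed into the general-purpose SFT data for the next iteration.
    return accepted
```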
2309.16609#51
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
51
Angelica Chen, David M. Dohan, and David R. So. Evoprompting: Language models for code-level neural architecture search. CoRR, abs/2302.14838, 2023. doi: 10.48550/arXiv.2302.14838. URL https://doi.org/10.48550/arXiv.2302.14838. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks, November 2022. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.
2309.16797#51
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
52
Using Tools via ReAct Prompting We have created and made publicly available a benchmark for evaluating QWEN’s ability to call plugins, tools, functions, or APIs using ReAct Prompting (see Qwen Team, Alibaba Group, 2023b). To ensure fair evaluation, we have excluded any plugins that were included in QWEN’s training set from the evaluation set. The benchmark assesses the model’s accuracy in selecting the correct plugin from a pool of up to five candidates, as well as the plausibility of the parameters passed into the plugin and the frequency of false positives. In this evaluation, a false positive occurs when the model incorrectly invokes a plugin in response to a query, despite not being required to do so.
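To make the three metrics concrete, here is a minimal scoring sketch under an assumed record schema (gold and predicted tool names, gold and predicted tool inputs, and a flag for whether a tool call is required); the Rouge-L implementation is the standard LCS-based F-measure, not necessarily the benchmark's exact scorer.

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest common subsequence over token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(pred, ref):
    p, r = pred.split(), ref.split()
    lcs = lcs_len(p, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(p), lcs / len(r)
    return 2 * prec * rec / (prec + rec)  # F1 over the LCS

def score_react_benchmark(records):
    # Hypothetical schema; 'pred_tool' is None when the model invoked no tool.
    needed = [r for r in records if r["tool_required"]]
    sel_acc = sum(r["pred_tool"] == r["gold_tool"] for r in needed) / len(needed)
    tool_input = sum(rouge_l(r["pred_input"], r["gold_input"]) for r in needed) / len(needed)
    spurious = [r for r in records if not r["tool_required"]]
    false_pos = sum(r["pred_tool"] is not None for r in spurious) / len(spurious)
    return sel_acc, tool_input, false_pos
```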
2309.16609#52
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
52
Richard Dawkins. 13 - The evolution of evolvability. In Sanjeev Kumar and Peter J. Bentley (eds.), On Growth, Form and Computers, pp. 239–255. Academic Press, London, January 2003. ISBN 978-0-12-428765-5. doi: 10.1016/B978-012428765-5/50046-3. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423. URL https://doi.org/10.18653/v1/n19-1423.
2309.16797#52
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
53
The results presented in Table 6 demonstrate that QWEN consistently achieves higher accuracy in identifying the relevance of a query to the available tools as the model size increases. However, the table also highlights that beyond a certain point, there is little improvement in performance when it comes to selecting the appropriate tool and providing relevant arguments. This suggests that the current preliminary benchmark may be relatively easy and may require further enhancement in future iterations. It is worth noting that GPT-3.5 stands out as an exception, displaying suboptimal performance on this particular benchmark. This could potentially be attributed to the fact that the benchmark primarily focuses on the Chinese language, which may not align well with GPT-3.5’s capabilities. Additionally, we observe that GPT-3.5 tends to attempt to use at least one tool, even if the query cannot be effectively addressed by the provided tools. Using Code Interpreter for Math Reasoning and Data Analysis The Python code interpreter is widely regarded as a powerful tool for augmenting the capabilities of an LLM agent. It is worth investigating whether QWEN can harness the full potential of this interpreter to enhance its performance in diverse domains, such as mathematical reasoning and data analysis. To facilitate this exploration, we have developed and made publicly available a benchmark that is specifically tailored for this purpose (see Qwen Team, Alibaba Group, 2023a).
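A minimal sketch of how such an interpreter loop can be wired up is given below; the fenced-code extraction convention and the `llm` callable are illustrative assumptions, and a real deployment would sandbox execution rather than call exec directly.

```python
import contextlib
import io

def run_snippet(code):
    # Execute one model-written snippet in a fresh namespace, capturing stdout.
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})
        return buf.getvalue()
    except Exception as e:
        return f"{type(e).__name__}: {e}"

def interpreter_loop(llm, question, max_turns=4):
    transcript = question
    for _ in range(max_turns):
        reply = llm(transcript)  # the model emits reasoning plus, optionally, a code block
        if "```python" not in reply:
            return reply  # no more code to run: treat the reply as the final answer
        code = reply.split("```python")[-1].split("```")[0]
        # Feed the execution result back so the next snippet can build on it,
        # e.g. inspecting a CSV's columns before plotting.
        transcript += reply + "\nObservation: " + run_snippet(code) + "\n"
    return transcript
```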
2309.16609#53
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
53
Alexander Gajewski, Jeff Clune, Kenneth O. Stanley, and Joel Lehman. Evolvability ES: scalable and direct optimization of evolvability. In Anne Auger and Thomas Stützle (eds.), Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2019, Prague, Czech Republic, July 13-17, 2019, pp. 107–115. ACM, 2019. doi: 10.1145/3321707.3321876. URL https://doi.org/10.1145/3321707.3321876. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did Aristotle use a laptop? A question answering benchmark with implicit reasoning strategies. Trans. Assoc. Comput. Linguistics, 9:346–361, 2021. doi: 10.1162/tacl_a_00370. URL https://doi.org/10.1162/tacl_a_00370. Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers, September 2023.
2309.16797#53
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
54
The benchmark encompasses three primary categories of tasks: math problem-solving, data visualization, and other general-purpose tasks like file post-processing and web crawling. Within the visualization tasks, we differentiate between two levels of difficulty. The easier level can be achieved by simply writing and executing a single code snippet without the need for advanced planning skills. However, the more challenging level requires strategic planning and executing multiple code snippets in a sequential manner. This is because the subsequent code must be written based on the output of the previous code. For example, an agent may need to examine the structure of a CSV file using one code snippet before proceeding to write and execute additional code to create a plot. Regarding evaluation metrics, we consider both the executability and correctness of the generated code. To elaborate on the correctness metrics, for math problems, we measure accuracy by verifying if the ground truth numerical answer is present in both the code execution result and the final response. When it comes to data visualization, we assess accuracy by utilizing QWEN-VL (Bai et al., 2023), a powerful multimodal language model. QWEN-VL is capable of answering text questions paired with images, and we rely on it to confirm whether the image generated by the code fulfills the user’s request.
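As a sketch of the math correctness criterion just described (a hypothetical checker, with a tolerance added to accommodate float formatting):

```python
import re

def math_correct(ground_truth, exec_result, final_response, tol=1e-6):
    # The gold number must appear in BOTH the code's output and the final reply.
    def contains(text):
        return any(abs(float(tok) - float(ground_truth)) <= tol
                   for tok in re.findall(r"-?\d+(?:\.\d+)?", text))
    return contains(exec_result) and contains(final_response)
```

For example, `math_correct("42", "result: 42.0", "The answer is 42.")` returns True, while a reply that states the number only in prose but never printed it from code would fail.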
2309.16609#54
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
54
Inman Harvey. The microbial genetic algorithm. In Advances in Artificial Life. Darwin Meets von Neumann: 10th European Conference, ECAL 2009, Budapest, Hungary, September 13-16, 2009, Revised Selected Papers, Part II 10, pp. 126–133. Springer, 2011.

Mark Hauschild and Martin Pelikan. An introduction and survey of estimation of distribution algorithms. Swarm and Evolutionary Computation, 1(3):111–128, 2011.

Or Honovich, Uri Shaham, Samuel R. Bowman, and Omer Levy. Instruction induction: From few examples to natural language task descriptions. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 1935–1952. Association for Computational Linguistics, 2023. doi: 10.18653/v1/2023.acl-long.108. URL https://doi.org/10.18653/v1/2023.acl-long.108.
2309.16797#54
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
55
The results regarding executability and correctness are presented in Table 7 and Table 8, respectively. It is evident that CODE LLAMA generally outperforms LLAMA 2, its generalist counterpart, which is not surprising since this benchmark specifically requires coding skills. However, it is worth noting that specialist models optimized for code synthesis do not necessarily outperform generalist models. This is because the benchmark encompasses various skills beyond coding, such as abstracting math problems into equations, understanding language-specified constraints, and responding in the specified format such as ReAct. Notably, QWEN-7B-CHAT and QWEN-14B-CHAT significantly surpass all other open-source alternatives of similar scale, despite being generalist models.

Serving as a Hugging Face Agent
Hugging Face provides a framework called the Hugging Face Agent or Transformers Agent (Hugging Face, 2023), which empowers LLM agents with a curated set of multimodal tools, including speech recognition and image synthesis. This framework allows an LLM agent to interact with humans, interpret natural language commands, and employ the provided tools as needed.
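To make the interaction pattern concrete, below is a minimal sketch of driving the Transformers Agent framework with a remote LLM. It assumes a transformers version that still ships the HfAgent class; the StarCoder endpoint URL and the two example commands are illustrative placeholders, not the benchmark's actual prompts.

```python
# Sketch only: drive the Transformers Agent framework with a remote LLM.
from transformers import HfAgent

# The agent wraps an LLM endpoint plus a curated set of multimodal tools
# (image synthesis, speech recognition, captioning, ...).
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# run() lets the LLM interpret a natural-language command, select tools,
# generate the glue code, and execute it.
picture = agent.run("Draw me a picture of rivers and lakes.")

# Extra keyword arguments are exposed to the generated code as variables.
caption = agent.run("Caption the following `image`.", image=picture)
print(caption)
```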
2309.16609#55
2309.16609
56
To evaluate QWEN’s effectiveness as a Hugging Face agent, we utilized the evaluation benchmarks offered by Hugging Face. The results are presented in Table 9. The evaluation results reveal that QWEN performs quite well in comparison to other open-source alternatives, only slightly behind the proprietary GPT-4, demonstrating QWEN’s competitive capabilities.

# 4 CODE-QWEN: SPECIALIZED MODEL FOR CODING

Training on domain-specific data has been shown to be highly effective, particularly in the case of code pretraining and finetuning. A language model reinforced with training on code data can serve as a valuable tool for coding, debugging, and interpretation, among other tasks. In this work, we have developed a series of generalist models using pretraining and alignment techniques. Building on this foundation, we have created domain-specific models for coding by leveraging the base language models of QWEN: the continued-pretrained model CODE-QWEN and the supervised-finetuned model CODE-QWEN-CHAT. Both models are available in 14-billion- and 7-billion-parameter versions.
2309.16609#56
2309.16797
56
Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alex Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. Distilling step-by-step! Outperforming larger language models with less training data and smaller model sizes. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 8003–8017. Association for Computational Linguistics, 2023. doi: 10.18653/v1/2023.findings-acl.507. URL https://doi.org/10.18653/v1/2023.findings-acl.507.

Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. CoRR, abs/2210.11610, 2022. doi: 10.48550/arXiv.2210.11610. URL https://doi.org/10.48550/arXiv.2210.11610.
2309.16797#56
2309.16609
57
4.1 CODE PRETRAINING
We believe that relying solely on code data for pretraining can result in a significant loss of the ability to function as a versatile assistant. Unlike previous approaches that focused solely on pretraining on code data (Li et al., 2022; 2023d), we take a different approach (Rozière et al., 2023) by starting with our base models QWEN trained on a combination of text and code data, and then continuing to
2309.16609#57
2309.16797
57
Kazuki Irie, Imanol Schlag, Róbert Csordás, and Jürgen Schmidhuber. A modern self-referential weight matrix that learns to modify itself. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 9660–9677. PMLR, 2022. URL https://proceedings.mlr.press/v162/irie22b.html.

Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando, and Koray Kavukcuoglu. Population based training of neural networks. CoRR, abs/1711.09846, 2017a. URL http://arxiv.org/abs/1711.09846.
2309.16797#57
2309.16609
58
pretrain on the code data. We continue to pretrain the models on a total of around 90 billion tokens. During the pre-training phase, we initialize the model using the base language models QWEN. Many applications that rely on specialized models for coding may encounter lengthy contextual scenarios, such as tool usage and code interpretation, as mentioned in Section 3.4. To address this issue, we train our models with context lengths of up to 8192. Similar to base model training in Section 2.4, we employ Flash Attention (Dao et al., 2022) in the attention modules, and adopt the standard optimizer AdamW (Kingma & Ba, 2014; Loshchilov & Hutter, 2017), setting β1 = 0.9, β2 = 0.95, and ϵ = 10^-8. We set the learning rate to 6.0 × 10^-5 for CODE-QWEN-14B and 3.0 × 10^-5 for CODE-QWEN-7B, with 3% warm-up iterations and no learning rate decay.
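A minimal sketch, assuming PyTorch, of the optimizer configuration stated above; the Linear module is only a placeholder for the actual CODE-QWEN network.

```python
# Sketch of the stated continued-pretraining optimizer setup (PyTorch).
import torch

model = torch.nn.Linear(1024, 1024)  # placeholder for the CODE-QWEN transformer

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=6.0e-5,           # CODE-QWEN-14B; the text gives 3.0e-5 for CODE-QWEN-7B
    betas=(0.9, 0.95),   # β1 and β2 as stated
    eps=1e-8,            # ϵ as stated
)
```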
2309.16609#58
2309.16797
58
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017b. URL https://openreview.net/forum?id=SJ6yPD5xg.

Minqi Jiang, Michael Dennis, Jack Parker-Holder, Jakob N. Foerster, Edward Grefenstette, and Tim Rocktäschel. Replay-guided adversarial environment design. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 1884–1897, 2021a. URL https://proceedings.neurips.cc/paper/2021/hash/0e915db6326b6fb6a3c56546980a8c93-Abstract.html.
2309.16797#58
2309.16609
59
4.2 CODE SUPERVISED FINE-TUNING
After conducting a series of empirical experiments, we have determined that the multi-stage SFT strategy yields the best performance compared to other methods. In the supervised fine-tuning stage, the CODE-QWEN-CHAT models, initialized from the code foundation model CODE-QWEN, are optimized by the AdamW (Kingma & Ba, 2014; Loshchilov & Hutter, 2017) optimizer (β1 = 0.9, β2 = 0.95, ϵ = 10^-8) with learning rates of 2.0 × 10^-6 and 1.0 × 10^-5 for the 14B and 7B models, respectively. The learning rate increases to its peak value with the cosine learning rate schedule (3% warm-up steps) and then remains constant.
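A minimal sketch, assuming PyTorch, of the fine-tuning schedule described above: ramp the learning rate over the first 3% of steps to its peak and then hold it constant. The optimizer, step count, and the linear shape of the warm-up ramp are illustrative assumptions.

```python
# Sketch of the described SFT schedule: 3% warm-up to peak, then constant.
import torch

optimizer = torch.optim.AdamW(
    torch.nn.Linear(8, 8).parameters(),  # placeholder parameters
    lr=1.0e-5,                           # 7B peak rate; 2.0e-6 for the 14B model
)

total_steps = 10_000                           # illustrative
warmup_steps = max(1, int(0.03 * total_steps))  # "3% warm-up steps"

def warmup_then_constant(step: int) -> float:
    # Multiplier rises from ~0 to 1 during warm-up, then stays at 1.
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup_then_constant)

for step in range(total_steps):
    # ... forward pass, loss.backward(), optimizer.step() would go here ...
    optimizer.zero_grad()
    scheduler.step()
```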
2309.16609#59
2309.16797
59
Minqi Jiang, Edward Grefenstette, and Tim Rocktäschel. Prioritized level replay. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 4940–4950. PMLR, 2021b. URL http://proceedings.mlr.press/v139/jiang21b.html.

Minqi Jiang, Tim Rocktäschel, and Edward Grefenstette. General intelligence requires rethinking exploration. CoRR, abs/2211.07819, 2022. doi: 10.48550/arXiv.2211.07819. URL https://doi.org/10.48550/arXiv.2211.07819.

Louis Kirsch and Jürgen Schmidhuber. Eliminating meta optimization through self-referential meta learning. CoRR, abs/2212.14392, 2022. doi: 10.48550/arXiv.2212.14392. URL https://doi.org/10.48550/arXiv.2212.14392.
2309.16797#59
2309.16609
60
4.3 EVALUATION
Our CODE-QWEN models have been compared with both proprietary and open-source language models, as shown in Tables 10 and 11, which present pass@1 results on the test sets of HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and the multi-lingual code generation benchmark HUMANEVALPACK (Muennighoff et al., 2023). Our analysis reveals that the specialized models, CODE-QWEN and CODE-QWEN-CHAT, significantly outperform previous baselines with similar parameter counts, such as OCTOGEEX (Muennighoff et al., 2023), InstructCodeT5+ (Wang et al., 2023d), and CodeGeeX2 (Zheng et al., 2023). In fact, these models even rival the performance of larger models like StarCoder (Li et al., 2023d).
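Since the comparison is based on pass@1, a minimal sketch of the unbiased pass@k estimator introduced by Chen et al. (2021) may be useful; the sample counts in the usage line are illustrative.

```python
# Sketch of the unbiased pass@k estimator (Chen et al., 2021):
# pass@k = 1 - C(n - c, k) / C(n, k), for n samples of which c pass the tests.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k drawn samples passes."""
    if n - c < k:
        return 1.0  # every size-k subset contains a passing sample
    # Numerically stable product form of 1 - C(n - c, k) / C(n, k).
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# For k = 1 this reduces to the fraction of passing samples, c / n.
print(pass_at_k(20, 3, 1))  # 0.15
```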
2309.16609#60
2309.16797
60
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html.

Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3:585–597, 2015. doi: 10.1162/tacl_a_00160. URL https://aclanthology.org/Q15-1042.
2309.16797#60
2309.16609
61
When compared to some of the extremely large-scale closed-source models, CODE-QWEN and CODE-QWEN-CHAT demonstrate clear advantages in terms of pass@1. However, it is important to note that these models generally fall behind state-of-the-art methods such as GPT-4. Nonetheless, with the continued scaling of both model size and data size, we believe that this gap can be narrowed in the near future. It is crucial to emphasize that the evaluations mentioned previously are insufficient for grasping the full extent of the strengths and weaknesses of the models. In our opinion, it is necessary to develop more rigorous tests to enable us to accurately assess our relative performance in comparison to GPT-4.

# 5 MATH-QWEN: SPECIALIZED MODEL FOR MATHEMATICS REASONING

We have created a mathematics-specialized model series called MATH-QWEN-CHAT, which is built on top of the QWEN pretrained language models. Specifically, we have developed assistant models that are designed to excel in arithmetic and mathematics and are aligned with human behavior. We are releasing two versions of this model series, MATH-QWEN-14B-CHAT and MATH-QWEN-7B-CHAT, which have 14 billion and 7 billion parameters, respectively.
2309.16609#61
2309.16797
61
Joel Lehman and Kenneth O. Stanley. Evolving a diversity of virtual creatures through novelty search and local competition. In Natalio Krasnogor and Pier Luca Lanzi (eds.), 13th Annual Genetic and Evolutionary Computation Conference, GECCO 2011, Proceedings, Dublin, Ireland, July 12-16, 2011, pp. 211–218. ACM, 2011a. doi: 10.1145/2001576.2001606. URL https://doi.org/10.1145/2001576.2001606.

Joel Lehman and Kenneth O. Stanley. Abandoning Objectives: Evolution Through the Search for Novelty Alone. Evolutionary Computation, 19(2):189–223, June 2011b. ISSN 1063-6560. doi: 10.1162/EVCO_a_00025.

Joel Lehman, Jonathan Gordon, Shawn Jain, Kamal Ndousse, Cathy Yeh, and Kenneth O. Stanley. Evolution through large models. CoRR, abs/2206.08896, 2022. doi: 10.48550/arXiv.2206.08896. URL https://doi.org/10.48550/arXiv.2206.08896.
2309.16797#61
2309.16609
62
5.1 TRAINING
We carry out math SFT on our augmented math instructional dataset for mathematics reasoning, and we therefore obtain the chat model, MATH-QWEN-CHAT, directly. Owing to the shorter average lengths of the math SFT data, we use a sequence length of 1024 for faster training. Most user inputs in the math SFT dataset are examination questions, and it is easy for the model to predict the input
Table 10: Results of pass@1 (%) on HumanEval and MBPP. Most scores are retrieved from the papers of StarCoder (Li et al., 2023d), CodeT5+ (Wang et al., 2023d), WizardCoder (Luo et al., 2023b) and CODE LLAMA (Rozière et al., 2023).
2309.16609#62
2309.16797
62
Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pp. 3045–3059. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.emnlp-main.243. URL https://doi.org/10.18653/v1/2021.emnlp-main.243.

Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 158–167, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1015. URL https://aclanthology.org/P17-1015.
2309.16797#62
2309.16609
63
Model                 Params  HumanEval  MBPP
Proprietary models
PaLM                  540B    26.2       36.8
PaLM-Coder            540B    36.0       47.0
PaLM 2-S              -       37.6       50.0
Code-Cushman-001      -       33.5       45.9
Code-Davinci-002      -       47.0       58.1
GPT-3.5               -       73.2       -
GPT-4                 -       86.6       -
Open-source models
LLAMA 2               7B      12.2       20.8
LLAMA 2               13B     20.1       27.6
LLAMA 2               34B     22.6       33.8
LLAMA 2               70B     30.5       45.4
CodeGen-Multi         16B     18.3       20.9
CodeGen-Mono          16B     29.3       35.3
CodeGeeX2             6B      35.9       -
StarCoder-Prompted    15B     40.8       49.5
CodeT5+               16B     30.9       -
InstructCodeT5+       16B     35.0       -
CODE LLAMA            7B      33.5       41.4
CODE LLAMA            13B     36.0       47.0
CODE LLAMA            34B     48.8       55.0
CODE LLAMA-INSTRUCT   7B      34.8       44.4
CODE LLAMA-INSTRUCT   13B     42.7       49.4
CODE LLAMA-INSTRUCT   34B     41.5       57.0
CODE LLAMA-PYTHON     7B      38.4       47.6
CODE LLAMA-PYTHON     13B     43.3       49.0
CODE LLAMA-PYTHON     34B     53.7       56.2
2309.16609#63
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
63
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. Lost in the middle: How language models use long contexts. CoRR, abs/2307.03172, 2023. doi: 10.48550/arXiv.2307.03172. URL https://doi.org/10.48550/arXiv.2307.03172. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. GPT understands, too. CoRR, abs/2103.10385, 2021. URL https://arxiv.org/abs/2103.10385.
2309.16797#63
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
64
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pp. 8086–8098. Association for Computational Linguistics, 2022. doi: 10.18653/v1/2022.acl-long.556. URL https://doi.org/10.18653/v1/2022.acl-long.556. Aman Madaan and Amir Yazdanbakhsh. Text and patterns: For effective chain of thought, it takes two to tango. CoRR, abs/2209.07686, 2022. doi: 10.48550/arXiv.2209.07686. URL https://doi.org/10.48550/arXiv.2209.07686.
2309.16797#64
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
65
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. CoRR, abs/2303.17651, 2023. doi: 10.48550/arXiv.2303.17651. URL https://doi.org/10.48550/arXiv.2303.17651. Elliot Meyerson, Mark J. Nelson, Herbie Bradley, Arash Moradi, Amy K. Hoover, and Joel Lehman. Language model crossover: Variation through few-shot prompting. CoRR, abs/2302.12170, 2023. doi: 10.48550/arXiv.2302.12170. URL https://doi.org/10.48550/arXiv.2302.12170.
2309.16797#65
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
66
Model            Params  Python  JavaScript  Java  Go    C++   Rust  Avg.
Proprietary models
GPT-4            -       86.6    82.9        81.7  72.6  78.7  67.1  78.3
Open-source models
InstructCodeT5+  16B     37.0    18.9        17.4  9.5   19.8  0.3   17.1
StarChat-β       15B     33.5    31.4        26.7  25.5  26.6  14.0  26.3
StarCoder        15B     33.6    30.8        30.2  17.6  31.6  21.8  27.6
CodeGeeX2        6B      35.9    32.2        30.8  22.5  29.3  18.1  28.1
OCTOGEEX         6B      44.7    33.8        36.9  21.9  32.3  15.7  30.9
OCTOCODER        15B     46.2    39.2        38.2  30.4  35.6  23.4  35.5
WizardCoder      15B     59.8    49.5        36.1  36.4  40.9  20.2  40.5
QWEN-CHAT        7B      37.2    23.2        32.9  20.7  22.0  9.1   24.2
QWEN-CHAT        14B     43.9    38.4        42.7  34.1  24.4  18.9  33.7
CODE-QWEN
2309.16609#66
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
66
Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, and Andy Zeng. Large language models as general pattern machines. CoRR, abs/2307.04721, 2023. doi: 10.48550/arXiv.2307.04721. URL https://doi.org/10.48550/arXiv.2307.04721. Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, and Grigorios Tsoumakas. ETHOS: a multi-label hate speech detection dataset. Complex and Intelligent Systems, 8(6):4663–4678, January 2022. doi: 10.1007/s40747-021-00608-2. URL https://doi.org/10.1007%2Fs40747-021-00608-2.
2309.16797#66
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
67
Milad Moradi and Matthias Samwald. Evaluating the robustness of neural language models to input perturbations. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pp. 1558–1570. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.emnlp-main.117. URL https://doi.org/10.18653/v1/2021.emnlp-main.117. Jean-Baptiste Mouret and Jeff Clune. Illuminating search spaces by mapping elites. CoRR, abs/1504.04909, 2015. URL http://arxiv.org/abs/1504.04909.
2309.16797#67
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
68
Table 12: Results of models on mathematical reasoning. We report the accuracy of QWEN for all benchmarks using greedy decoding. For MATH, we report QWEN's performance on the test set from Lightman et al. (2023).

Model            Params  GSM8K  MATH  Math401  Math23K
Proprietary models
GPT-4            -       92.0   42.5  83.5     74.0
GPT-3.5          -       80.8   34.1  75.1     60.0
Minerva          8B      16.2   14.1  -        -
Minerva          62B     52.4   27.6  -        -
Minerva          540B    58.8   33.6  -        -
Open-source models
LLaMA-1 RFT      7B      46.5   5.2   -        -
LLaMA-1 RFT      13B     52.1   5.1   -        -
WizardMath       7B      54.9   10.7  -        -
WizardMath       13B     63.9   14.0  -        -
WizardMath       70B     81.6   22.7  -        -
GAIRMath-Abel    7B      59.7   13.0  -        -
GAIRMath-Abel    13B     66.4   17.3  -        -
GAIRMath-Abel    70B     83.6   28.3  -        -
QWEN-CHAT        7B      50.3   6.8   57.4     51.2
QWEN-CHAT        14B     60.1   18.4  70.1     67.0
MATH-QWEN-CHAT   7B      62.5   17.2  80.8     75.4
MATH-QWEN-CHAT   14B     69.8   24.2  85.0     78.4
2309.16609#68
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
68
Maxwell I. Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models. CoRR, abs/2112.00114, 2021. URL https://arxiv.org/abs/2112.00114. Michael Öllinger and Günther Knoblich. Psychological research on insight problem solving. In Recasting reality: Wolfgang Pauli's philosophical ideas and contemporary science, pp. 275–300. Springer, 2009. Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. CoRR, abs/2304.03442, 2023. doi: 10.48550/arXiv.2304.03442. URL https://doi.org/10.48550/arXiv.2304.03442.
2309.16797#68
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
69
format, and it is meaningless for the model to predict the input conditions and numbers, which could be random. Thus, we mask the system and user inputs to avoid computing loss on them; in our preliminary experiments, we found that this masking accelerates convergence. For optimization, we use the AdamW optimizer with the same hyperparameters as SFT, except for a peak learning rate of 2 × 10^-5 and 50,000 training steps. # 5.2 EVALUATION
2309.16609#69
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
69
Jack Parker-Holder, Minqi Jiang, Michael Dennis, Mikayel Samvelyan, Jakob N. Foerster, Edward Grefenstette, and Tim Rocktäschel. Evolving curricula with regret-based environment design. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 17473–17498. PMLR, 2022. URL https://proceedings.mlr.press/v162/parker-holder22a.html.
2309.16797#69
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
70
# 5.2 EVALUATION We evaluate models on the test sets of GSM8K (grade-school math) (Cobbe et al., 2021), MATH (challenging competition math problems) (Hendrycks et al., 2021), Math401 (arithmetic ability) (Yuan et al., 2023b), and Math23K (Chinese grade-school math) (Wang et al., 2017). In Table 12, we compare MATH-QWEN-CHAT with the proprietary models ChatGPT and Minerva (Lewkowycz et al., 2022) and the open-source math-specialized models RFT (Yuan et al., 2023a), WizardMath (Luo et al., 2023a), and GAIRMath-Abel (Chern et al., 2023a). MATH-QWEN-CHAT models show better math reasoning and arithmetic abilities than open-source models and QWEN-CHAT models of similar sizes. Compared to proprietary models, MATH-QWEN-7B-CHAT outperforms Minerva-8B on MATH, and MATH-QWEN-14B-CHAT approaches Minerva-62B and GPT-3.5 on GSM8K and MATH while delivering better performance on arithmetic ability and Chinese math problems. # 6 RELATED WORK 6.1 LARGE LANGUAGE MODELS
2309.16609#70
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
70
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve sim- ple math word problems? In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-T¨ur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pp. 2080–2094. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.naacl-main.168. URL https://doi.org/10.18653/v1/2021. naacl-main.168. Joshua L. Payne and Andreas Wagner. The causes of evolvability and their evolution. Nature Re- views Genetics, 20(1):24–38, January 2019. ISSN 1471-0064. doi: 10.1038/s41576-018-0069-z.
2309.16797#70
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
71
# 6 RELATED WORK 6.1 LARGE LANGUAGE MODELS The excitement around LLMs began with the introduction of the Transformer architecture (Vaswani et al., 2017), which researchers such as Radford et al. (2018); Devlin et al. (2018); Liu et al. (2019) then applied to pretraining on large-scale data. These efforts led to significant success in transfer learning, with model sizes growing from 100 million to over 10 billion parameters (Raffel et al., 2020; Shoeybi et al., 2019).
2309.16609#71
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
71
Massimo Pigliucci. Is evolvability evolvable? Nature Reviews Genetics, 9(1):75–82, January 2008. ISSN 1471-0064. doi: 10.1038/nrg2278. Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. Automatic prompt optimization with "gradient descent" and beam search. arXiv preprint arXiv:2305.03495, 2023. Guanghui Qin and Jason Eisner. Learning How to Ask: Querying LMs with Mixtures of Soft Prompts, April 2021. Subhro Roy and Dan Roth. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413, 2016. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language Models Can Teach Themselves to Use Tools, February 2023.
2309.16797#71
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
72
In 2020, the release of GPT-3, a massive language model that is 10 times larger than T5, demonstrated the incredible potential of few-shot and zero-shot learning through prompt engineering and in-context learning, and later chain-of-thought prompting (Wei et al., 2022c). This success has led to a number of studies exploring the possibilities of further scaling these models (Scao et al., 2022; Zhang et al., 2022; Du et al., 2021; Zeng et al., 2022; Lepikhin et al., 2020; Fedus et al., 2022; Du et al., 2022; Black et al., 2022; Rae et al., 2021; Hoffmann et al., 2022; Chowdhery et al., 2022; Thoppilan et al., 2022). As a result, the community has come to view these large language models as essential foundations for downstream models (Bommasani et al., 2021).
2309.16609#72
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
72
J. Schmidhuber. A ‘Self-Referential’ Weight Matrix. In Stan Gielen and Bert Kappen (eds.), ICANN ’93, pp. 446–450, London, 1993. Springer. ISBN 978-1-4471-2063-6. doi: 10.1007/978-1-4471-2063-6_107. Jürgen Schmidhuber. Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. 1990. Jürgen Schmidhuber. Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks. Neural Computation, 4(1):131–139, January 1992. ISSN 0899-7667. doi: 10.1162/neco.1992.4.1.131. Jürgen Schmidhuber. Gödel machines: self-referential universal problem solvers making provably optimal self-improvements. arXiv preprint cs/0309048, 2003.
2309.16797#72
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutationprompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
73
The birth of ChatGPT (OpenAI, 2022) and the subsequent launch of GPT-4 (OpenAI, 2023) marked two historic moments in the field of artificial intelligence, demonstrating that large language models (LLMs) can serve as effective AI assistants capable of communicating with humans. These events have sparked interest among researchers and developers in building language models that are aligned with human values and potentially even capable of achieving artificial general intelligence (AGI) (Anil et al., 2023; Anthropic, 2023a;b). One notable development in this area is the emergence of open-source LLMs, specifically LLaMA (Touvron et al., 2023a) and LLAMA 2 (Touvron et al., 2023b), which have been recognized as the most powerful open-source language models ever created. This has led to a surge of activity in the open-source community (Wolf et al., 2019), with a series of large language models being developed collaboratively to build upon this progress (Mosaic ML, 2023; Almazrouei et al., 2023; ChatGLM2 Team, 2023; Yang et al., 2023; InternLM Team, 2023). 6.2 ALIGNMENT
2309.16609#73
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
73
Jimmy Secretan, Nicholas Beato, David B. D'Ambrosio, Adelein Rodriguez, Adam Campbell, and Kenneth O. Stanley. Picbreeder: Evolving pictures collaboratively online. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pp. 1759–1768, New York, NY, USA, April 2008. Association for Computing Machinery. ISBN 978-1-60558-011-1. doi: 10.1145/1357054.1357328. Ofer M Shir and Thomas Bäck. Niching in evolution strategies. In Proceedings of the 7th annual conference on Genetic and evolutionary computation, pp. 915–916, 2005. Kashun Shum, Shizhe Diao, and Tong Zhang. Automatic prompt augmentation and selection with chain-of-thought from labeled data. CoRR, abs/2302.12822, 2023. doi: 10.48550/arXiv.2302.12822. URL https://doi.org/10.48550/arXiv.2302.12822.
2309.16797#73
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
74
6.2 ALIGNMENT The community was impressed by the surprising effectiveness of alignment on LLMs. Previously, LLMs without alignment often struggled with issues such as repetitive generation, hallucination, and deviation from human preferences. Since 2021, researchers have been diligently working on developing methods to enhance the performance of LLMs in downstream tasks (Wei et al., 2022a; Sanh et al., 2021; Longpre et al., 2023; Chung et al., 2022; Muennighoff et al., 2022). Furthermore, researchers have been actively exploring ways to align LLMs with human instructions (Ouyang et al., 2022; Askell et al., 2021; Bai et al., 2022b;c). One major challenge in alignment research is the difficulty of collecting data. While OpenAI has utilized its platform to gather human prompts or instructions, it is not feasible for others to collect such data.
2309.16609#74
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
74
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149–4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. CoRR, abs/2305.16291, 2023a. doi: 10.48550/arXiv.2305.16291. URL https://doi.org/10.48550/arXiv.2305.16291.
2309.16797#74
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
75
However, there has been some progress in this area, such as the self-instruct approach proposed in Wang et al. (2023c). This innovative work offers a potential solution to the data collection problem in alignment research. As a result, there has been a surge in open-source chat data, including Alpaca (Taori et al., 2023), MOSS (Sun et al., 2023a), Dolly (Conover et al., 2023), Evol-Instruct (Xu et al., 2023b), and others (Sun et al., 2023b; Xu et al., 2023a;c; Chen et al., 2023c; Ding et al., 2023; Ji et al., 2023; Yang, 2023). Similarly, there has been an increase in open-source chat models, such as Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), Guanaco (Dettmers et al., 2023), MOSS (Sun et al., 2023a), WizardLM (Xu et al., 2023b), and others (Xu et al., 2023c; Chen et al., 2023c; Ding et al., 2023; Wang et al., 2023b).
2309.16609#75
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
75
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 2609–2634. Association for Computational Linguistics, 2023b. doi: 10.18653/v1/2023.acl-long.147. URL https://doi.org/10.18653/v1/2023.acl-long.147. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
2309.16797#75
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
76
To train an effective chat model, available solutions are mostly based on SFT and RLHF (Ouyang et al., 2022). While SFT is similar to pretraining, it focuses on instruction following using the aforementioned data. However, for many developers, limited memory capacity is a major obstacle to further research in SFT. As a result, parameter-efficient tuning methods, such as LoRA (Hu et al., 2021) and Q-LoRA (Dettmers et al., 2023), have gained popularity in the community. LoRA tunes only low-rank adapters (a minimal sketch follows below), while Q-LoRA builds on LoRA and additionally utilizes 4-bit quantized LLMs and paged attention (Dettmers et al., 2022; Frantar et al., 2022; Kwon et al., 2023). In terms of RLHF, recent methods such as PPO (Schulman et al., 2017; Touvron et al., 2023b) have been adopted, but there are also alternative techniques aimed at reducing the complexity of optimization, such as RRHF (Yuan et al., 2023c), DPO (Rafailov et al., 2023), and PRO (Song et al., 2023). Despite the ongoing debate about the effectiveness of RLHF, more evidence is needed to understand how it enhances the intelligence of LLMs and what potential drawbacks it may have. 6.3 TOOL USE AND AGENTS
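To make the low-rank adapter idea above concrete, here is a minimal sketch assuming PyTorch; the LoRALinear class and the rank and alpha values are illustrative stand-ins, not the implementation of Hu et al. (2021):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (a sketch of the LoRA idea)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen; only the adapters train
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scale * (x A) B: adds only in_features*rank + rank*out_features weights
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

layer = LoRALinear(nn.Linear(1024, 1024), rank=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 16384 trainable vs. ~1.05M frozen

Because lora_b starts at zero, the adapted layer initially computes exactly what the frozen base layer computes, a common choice for stable fine-tuning.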
2309.16609#76
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
76
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 13484–13508. Association for Computational Linguistics, 2023c. doi: 10.18653/v1/2023.acl-long.754. URL https://doi.org/10.18653/v1/2023.acl-long.754. Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. CoRR, abs/2302.01560, 2023d. doi: 10.48550/arXiv.2302.01560. URL https://doi.org/10.48550/arXiv.2302.01560.
2309.16797#76
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
77
An LLM's planning capability allows for the invocation of tools, such as APIs or agent capabilities, through in-context learning, as demonstrated by Schick et al. (2023). Yao et al. (2022) introduced ReAct, a generation format that enables the model to generate thoughts on which tool to use, accept input from API observations, and generate a response. GPT-3.5 and GPT-4, when prompted with a few shots, have shown consistent and impressive performance. In addition to tool usage, LLMs can utilize external memory sources like knowledge bases (Hu et al., 2023; Zhong et al., 2023b) or search engines (Nakano et al., 2021; Liu et al., 2023b) to generate more accurate and informative answers. This has led to the popularity of frameworks like LangChain (LangChain, Inc., 2023). The research on LLMs for tool use has also sparked interest in building agents with LLM capabilities, such as agents that can call different AI models (Shen et al., 2023; Li et al., 2023a), embodied lifelong learning or multimodal agents (Wang et al., 2023a; Driess et al., 2023), and
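As a minimal illustration of the ReAct-style thought/action/observation format described above, here is a sketch assuming a generic llm completion function and a tools dictionary mapping tool names to callables; both names, and the exact text markers, are illustrative assumptions rather than the format of Yao et al. (2022):

import re

def react_loop(llm, tools, question, max_steps=5):
    """Interleave model thoughts, tool actions, and observations until a final answer."""
    transcript = f"Question: {question}\nThought:"
    for _ in range(max_steps):
        step = llm(transcript)  # the model continues with reasoning and, possibly, an action
        transcript += step
        answer = re.search(r"Final Answer: (.*)", step)
        if answer:
            return answer.group(1)
        action = re.search(r"Action: (\w+)\[(.*?)\]", step)
        if action:
            name, arg = action.groups()
            observation = tools[name](arg)  # e.g. a search or calculator function
            transcript += f"\nObservation: {observation}\nThought:"
    return transcript  # give up after max_steps and return the full trace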
2309.16609#77
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
77
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html. Yue Wu, Shrimai Prabhumoye, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Tom M. Mitchell, and Yuanzhi Li. SPRING: GPT-4 out-performs RL algorithms by studying papers and reasoning. CoRR, abs/2305.15486, 2023. doi: 10.48550/arXiv.2305.15486. URL https://doi.org/10.48550/arXiv.2305.15486.
2309.16797#77
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
78
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, and Xinyun Chen. Large language models as optimizers. CoRR, abs/2309.03409, 2023a. doi: 10.48550/arXiv.2309.03409. URL https://doi.org/10.48550/arXiv.2309.03409. Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. MM-REACT: Prompting ChatGPT for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023b. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
2309.16797#78
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
79
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of Thoughts: Deliberate problem solving with large language models, May 2023. Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. STaR: Bootstrapping reasoning with reasoning. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/639a9a172c044fbb64175b5fad42e9a5-Abstract-Conference.html. Jenny Zhang, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. OMNI: Open-endedness via models of human notions of interestingness. CoRR, abs/2306.01711, 2023a. doi: 10.48550/arXiv.2306.01711. URL https://doi.org/10.48550/arXiv.2306.01711.
2309.16797#79
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
80
Previous research has demonstrated that LLMs possess remarkable capabilities in code understanding and generation, particularly those with massive numbers of parameters (Chowdhery et al., 2022; Anil et al., 2023; Rae et al., 2021; Hoffmann et al., 2022). Moreover, several LLMs have been pre-trained, continually pre-trained, or fine-tuned on coding-related data, which has resulted in significantly improved performance compared to general-purpose LLMs. These models include Codex (Chen et al., 2021), AlphaCode (Li et al., 2022), SantaCoder (Allal et al., 2023), StarCoder-Base (Li et al., 2023d), InCoder (Fried et al., 2022), CodeT5 (Wang et al., 2021), CodeGeeX (Zheng et al., 2023), and CODE LLAMA (Rozière et al., 2023). In addition to these models, recent studies have focused on developing specialized alignment techniques for coding, such as Code Llama-Instruct (Rozière et al., 2023) and StarCoder (Li et al., 2023d). These models can assist developers in various code-related tasks, including code generation
2309.16609#80
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
80
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023b. URL https://openreview.net/pdf?id=5NTt8GFjUHkr. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022. Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=92gvk82DE-.
2309.16797#80
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
81
et al., 2023) and StarCoder (Li et al., 2023d). These models can assist developers in various code-related tasks, including code generation (Chen et al., 2021; Austin et al., 2021), code completion (Zhang et al., 2023a), code translation (Szafraniec et al., 2023), bug fixing (Muennighoff et al., 2023), code refinement (Liu et al., 2023c), and code question answering (Liu & Wan, 2021). In a word, LLMs
2309.16609#81
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
81
A GLOSSARY Estimation of Distribution Algorithm An optimization algorithm that iteratively refines a probabilistic model of promising solutions, often using the whole population as a guide. Fitness Proportionate Selection Also known as Roulette-Wheel Selection: an individual is chosen in proportion to its fitness in the population (sketched in code after this glossary). Mutation Prompt The text prompt which, when concatenated to the task-prompt, is intended to produce a continuation which is an improved task-prompt. Problem description The initial text description of the problem, which could be used as the initial task-prompt. The user can make their best attempt to produce an effective problem description, which is the starting point of Promptbreeder. Prompt Strategy A set of task-prompts and rules for their application at inference time during a fitness evaluation. In the minimal case the prompt strategy is just a single task-prompt. Typically our prompt strategies consisted of two sequentially applied task-prompts. Phenotype/Workings out/Context/Reasoning Path Used interchangeably to mean the output of the LLM on a specific question or problem when prompted with the task-prompt concatenated to the question. Population The set of units of evolution (e.g. 50).
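The fitness proportionate selection entry maps directly to code; a minimal sketch, with an illustrative population of prompt strings:

import random

def roulette_wheel_select(population, fitnesses):
    """Pick an individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    if total == 0:  # degenerate case: all fitnesses zero, fall back to uniform choice
        return random.choice(population)
    pick = random.uniform(0, total)
    cumulative = 0.0
    for individual, fitness in zip(population, fitnesses):
        cumulative += fitness
        if pick <= cumulative:
            return individual
    return population[-1]  # guard against floating-point drift

prompts = ["Let's think step by step.", "Answer yes or no.", "Explain it like a teacher."]
print(roulette_wheel_select(prompts, [0.7, 0.2, 0.1]))  # first prompt chosen ~70% of the time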
2309.16797#81
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
82
Population The set of units of evolution (e.g. 50). Unit of evolution The informational structure that is being evolved, here consisting of a task-prompt set (typically 2), a mutation-prompt, and in the few-shot case a set of 2-3 contexts (workings out). B A TYPICAL EVOLUTIONARY RUN The word in context task is one of the 24 instruction induction tasks used in APE. Given two sentences and a homograph word, the LLM must determine whether the homograph word has been used with the same meaning in both sentences. Figure 3 shows an evolutionary run where blue dots are individual fitness evaluations and the red line is the population mean. Over 2000 evaluations, the fitness increases considerably. The best evolved Prompt 1 and Prompt 2 pairs (evaluated on the training set) are shown on the right.
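For intuition about what one such run computes, here is a hill-climbing sketch of the mutate-and-evaluate loop described above; the llm and fitness functions (the latter scoring a task-prompt on the training set) are hypothetical stand-ins, and Promptbreeder's actual tournament and mutation operators are richer than this:

import random

def evolve_prompts(llm, fitness, initial_prompts, mutation_prompt, generations=100):
    """Mutate task-prompts with a mutation-prompt; keep a mutant only if it scores higher."""
    population = [(p, fitness(p)) for p in initial_prompts]
    for _ in range(generations):
        i = random.randrange(len(population))
        parent, parent_score = population[i]
        mutant = llm(f"{mutation_prompt}\nINSTRUCTION: {parent}\nNEW INSTRUCTION:")
        mutant_score = fitness(mutant)
        if mutant_score > parent_score:
            population[i] = (mutant, mutant_score)  # replace the parent with the fitter mutant
    return max(population, key=lambda pair: pair[1])[0]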
2309.16797#82
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
83
LLMs at a sufficient model scale have been found to possess the ability to perform mathematical reasoning (Wei et al., 2022b; Suzgun et al., 2022). In order to encourage LLMs to achieve better performance on math-related tasks, researchers have employed techniques such as chain-of-thought prompting (Wei et al., 2022c) and scratchpad (Nye et al., 2021), which have shown promising results. Additionally, self-consistency (Wang et al., 2022) and least-to-most prompting (Zhou et al., 2022) have further improved the performance of these models on these tasks. However, prompt engineering is a time-consuming process that requires a lot of trial and error, and it is still difficult for LLMs to consistently perform well or achieve satisfactory results in solving mathematical problems. Moreover, simply scaling the data and model size is not an efficient way to improve a model's mathematical reasoning abilities. Instead, pretraining on math-related corpora has been shown to consistently enhance these capabilities (Hendrycks et al., 2021; Lewkowycz et al., 2022; Taylor et al., 2022; Lightman et al.,
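As one concrete example of these prompting techniques, self-consistency (Wang et al., 2022) can be sketched in a few lines; the sampling llm function, its temperature argument, and the extract_answer parser are hypothetical assumptions:

from collections import Counter

def self_consistency(llm, extract_answer, question, samples=10):
    """Sample several chain-of-thought completions and return the majority answer."""
    prompt = question + "\nLet's think step by step."
    answers = []
    for _ in range(samples):
        completion = llm(prompt, temperature=0.7)  # sampling yields diverse reasoning paths
        answers.append(extract_answer(completion))
    return Counter(answers).most_common(1)[0][0]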
2309.16609#83
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
83
[Figure 3 panel: word_in_context]
Prompt 1: "Sentences are given, and a single word. The output should indicate whether the given word has the same sense in the two given sentences, yes or no."
Prompt 2: "Sentences are given, and a single word. The answer should indicate whether the given word has the same meaning in the two given sentences, yes or no."
Prompt 1: "Identify if the word in bold font below is used with the same meaning in the two sentences below it. The word in bold may be used as different parts of speech in the two sentences. I think the if should come before"
Prompt 2: "Answer by following a template like: Sentences are given, and a single word. The answer should indicate whether the given word has the same meaning in the two given sentences, yes or no."
Prompt 1: "Sentences are given, and a single word. The output should indicate whether the given word has the same meaning in the two given sentences, yes or no"
Prompt 2: "Identify if the word in bold font below is used with the same meaning in the two sentences below it. The word in bold may be used as different parts of speech in the two
2309.16797#83
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
84
enhance these capabilities (Hendrycks et al., 2021; Lewkowycz et al., 2022; Taylor et al., 2022; Lightman et al., 2023). Additionally, fine-tuning on math-related instruction-following datasets (Si et al., 2023; Yuan et al., 2023a; Luo et al., 2023a; Yue et al., 2023; Chern et al., 2023a; Yu et al., 2023) has also proven effective, and it is cheaper than math-specific pretraining. Despite their limitations in accuracy, LLMs still have significant potential to assist users with practical mathematical problems, and there is ample scope for further development in this area.
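To make the instruction-following setup concrete, here is a hypothetical record of the kind such datasets contain; the chat schema and the worked problem are illustrative assumptions, not drawn from any of the cited datasets.

```python
# A hypothetical math instruction-tuning record in a generic chat format.
# The field names and the problem itself are illustrative assumptions only.
example = {
    "messages": [
        {
            "role": "user",
            "content": (
                "A shop sells 48 apples on Monday and half as many on Tuesday. "
                "How many apples does it sell in total?"
            ),
        },
        {
            "role": "assistant",
            "content": (
                "Monday: 48 apples. Tuesday: 48 / 2 = 24 apples. "
                "Total: 48 + 24 = 72. The answer is 72."
            ),
        },
    ]
}
```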
2309.16609#84
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
84
if the word in bold font below is used with the same meaning in the two sentences below it. The word in bold may be used as different parts of speech in the two sentences." I think 'same' should come between"
Prompt 1: "Sentences are given, and a single word. The answer should indicate whether the given word has the same meaning in the two given sentences, yes or no"
Prompt 2: "Identify if the word in bold font below is used with the same meaning in the two sentences below it. The word in bold may be used as different parts of speech in the two sentences." I think 'same' should come between"
Prompt 1: ": I'll give you two sentences and a word. Your task is to write if the meaning of the word is the same in both sentences or not."
Prompt 2: "Identify if the word in bold font below is used with the same meaning in the two sentences below it. The word in bold may be used as different parts of speech in the two sentences." I think 'same' should come between"
Prompt 1: ": I'll give you two sentences and a word. Your task is to write if the meaning of the word is the same in both sentences or not."
Prompt 2: "Your mission is to replace W in the first sentence
2309.16797#84
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
85
# 7 CONCLUSION

In this report, we present the QWEN series of large language models, which showcase the latest advancements in natural language processing. With 14B, 7B, and 1.8B parameters, these models have been pre-trained on massive amounts of data, including trillions of tokens, and fine-tuned using cutting-edge techniques such as SFT and RLHF. Additionally, the QWEN series includes specialized models for coding and mathematics, such as CODE-QWEN, CODE-QWEN-CHAT, and MATH-QWEN-CHAT, which have been trained on domain-specific data to excel in their respective fields. Our results demonstrate that the QWEN series is competitive with existing open-source models and even matches the performance of some proprietary models on comprehensive benchmarks and human evaluation. We believe that the open access of QWEN will foster collaboration and innovation within the community, enabling researchers and developers to build upon our work and push the boundaries of what is possible with language models. By providing these models to the public, we hope to inspire new research and applications that will further advance the field and contribute to our understanding of the variables and techniques introduced in realistic settings. In a nutshell, the QWEN series represents a major milestone in our development of large language models, and we are excited to see how it will be used to drive progress and innovation in the years to come.

# REFERENCES
2309.16609#85
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
85
Your task is to write if the meaning of the word is the same in both sentences or not."
Prompt 2: "Your mission is to replace W in the first sentence with the most similar word in terms of usage from the second sentence such that both the meaning and the grammatical validity of the first sentence do not get distorted after replacement."
Prompt 1: "as follows:"
Prompt 2: ": In each input, you will be given two sentences and a word. Decide whether the word means the same thing in both sentences. Type same if it does, and not the same if it doesn't."
[Figure 3 x-axis: Evaluations, 0–2000]
2309.16797#85
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
86
# REFERENCES

Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. SantaCoder: Don't reach for the stars! arXiv preprint arXiv:2301.03988, 2023.

Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. Falcon-40B: An open large language model with state-of-the-art performance, 2023.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. PaLM 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
2309.16609#86
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
86
Figure 3: A typical evolutionary run in which a prompt strategy consisting of two sequentially applied prompts is evolved to solve the word in context task from the APE 24 instruction induction task. See the progression in the prompts evolved through the run. The elite prompts are shown as they appear. Blue dots show training set evaluations. Red line shows the population mean fitness.

# C MUTATION PROMPTS

Table 2: Mutator Prompts

1. Modify the following instruction creatively, giving some advice on how to solve it:
2. Just change this instruction to make it more fun, think WELL outside the box:
3. Modify this instruction in a way that no self-respecting LLM would!
4. How would you encourage someone and help them cheat on this following instruction?
5. How would you help an LLM to follow the instruction?
6. Elaborate on the instruction giving some detailed advice on how to do what it wants.
7. Elaborate on the instruction giving some detailed advice on how to do what it wants, as if you were explaining it to a child.
8. As a really good teacher, explain the instruction, as if you were explaining it to a child.
(Table 2 continues with prompts 9–30.)
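A minimal sketch of how one of these mutator prompts might be applied to a task-prompt: only the listed mutator strings come from Table 2; the `llm` callable and the INSTRUCTION/NEW INSTRUCTION scaffolding are assumptions.

```python
import random
from typing import Callable

# A small subset of the mutator prompts listed in Table 2.
MUTATOR_PROMPTS = [
    "Modify the following instruction creatively, giving some advice on how to solve it:",
    "Just change this instruction to make it more fun, think WELL outside the box:",
    "As a really good teacher, explain the instruction, as if you were explaining it to a child.",
]

def mutate_task_prompt(task_prompt: str, llm: Callable[[str], str]) -> str:
    """Draw a random mutator prompt and apply it to a task-prompt via the LLM."""
    mutator = random.choice(MUTATOR_PROMPTS)
    return llm(f"{mutator}\nINSTRUCTION: {task_prompt}\nNEW INSTRUCTION:")
```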
2309.16797#86
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
87
Anthropic. Introducing Claude, 2023a. URL https://www.anthropic.com/index/introducing-claude.

Anthropic. Claude 2. Technical report, Anthropic, 2023b. URL https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf.

Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q Tran, Dara Bahri, Jianmo Ni, et al. ExT5: Towards extreme multi-task scaling for transfer learning. arXiv preprint arXiv:2111.10952, 2021.

Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
2309.16609#87
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
87
- Imagine you need to follow this instruction. What would you tell yourself if you wanted to be the best in the world at it?
- How would someone with derailment follow this instruction?
- Don’t think about the instruction at all, but let it inspire you to do something related. Talk about what that might be.
- Rephrase the instruction without using any of the same words.
- Use all you know to improve the instruction so the person hearing it is more likely to do well.
- Say that instruction again in another way. DON’T use any of the words in the original instruction or you’re fired.
- Say that instruction again in another way. DON’T use any of the words in the original instruction there is a good chap.
- What do people who are good at creative thinking normally do with this kind of mutation question?
- Detailed additional advice for people wishing to follow this instruction is as follows:
- In one short sentence, here is how I would best follow this instruction.
- In one short sentence, here is some detailed expert advice. Notice how I don’t use any of the same words as in the INSTRUCTION.
- In one short sentence, the general solution is as follows. Notice how I don’t use any of the same words as in the INSTRUCTION.
- In one short
2309.16797#87
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
88
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

AutoGPT. AutoGPT: The heart of the open-source agent ecosystem, 2023. URL https://github.com/Significant-Gravitas/Auto-GPT.

Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. URL http://arxiv.org/abs/1607.06450.
2309.16609#88
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
88
- In one short sentence, the general solution is as follows. Notice how I don’t use any of the same words as in the INSTRUCTION.
- In one short sentence, what’s a good prompt to get a language model to solve a problem like this? Notice how I don’t use any of the same words as in the INSTRUCTION.
- Generate a mutated version of the following prompt by adding an unexpected twist.
- Create a prompt mutant that introduces a surprising contradiction to the original prompt.
- Mutate the prompt to provide an alternative perspective or viewpoint.
- Generate a prompt mutant that incorporates humor or a playful element.
- Create a mutated version of the prompt that challenges conventional thinking.
- Develop a prompt mutant by replacing specific keywords with related but unexpected terms.
- Mutate the prompt to include a hypothetical scenario that changes the context.
- Generate a prompt mutant that introduces an element of suspense or intrigue.
- Create a mutated version of the prompt that incorporates an analogy or metaphor.
- Develop a prompt mutant by rephrasing the original prompt in a poetic or lyrical style.
- Think beyond the ordinary and mutate the prompt in a way that defies traditional
2309.16797#88
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
89
Jinze Bai, Rui Men, Hao Yang, Xuancheng Ren, Kai Dang, Yichang Zhang, Xiaohuan Zhou, Peng Wang, Sinan Tan, An Yang, Zeyu Cui, Yu Han, Shuai Bai, Wenbin Ge, Jianxin Ma, Junyang Lin, Jingren Zhou, and Chang Zhou. OFASys: A multi-modal multi-task learning system for building generalist models. CoRR, abs/2212.04408, 2022a. doi: 10.48550/arXiv.2212.04408. URL https://doi.org/10.48550/arXiv.2212.04408.

Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-VL: A versatile vision-language model for understanding, localization, text reading, and beyond. CoRR, abs/2308.12966, 2023. doi: 10.48550/arXiv.2308.12966. URL https://doi.org/10.48550/arXiv.2308.12966.
2309.16609#89
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
89
- prompt mutant by rephrasing the original prompt in a poetic or lyrical style.
- Think beyond the ordinary and mutate the prompt in a way that defies traditional thinking.
- Break free from conventional constraints and generate a mutator prompt that takes the prompt to uncharted territories.
- Challenge the norm and create a mutator prompt that pushes the boundaries of traditional interpretations.
- Embrace unconventional ideas and mutate the prompt in a way that surprises and inspires unique variations.
- Think outside the box and develop a mutator prompt that encourages unconventional approaches and fresh perspectives.
- Step into the realm of imagination and create a mutator prompt that transcends limitations and encourages innovative mutations.
- Break through the ordinary and think outside the box to generate a mutator prompt that unlocks new possibilities and unconventional paths.
- Embrace the power of unconventional thinking and create a mutator prompt that sparks unconventional mutations and imaginative outcomes.
- Challenge traditional assumptions and break the mold with a mutator prompt that encourages revolutionary and out-of-the-box variations.
- Go beyond the expected and create a mutator prompt that leads to
2309.16797#89
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
90
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022b.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022c.

Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
2309.16609#90
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
90
- with a mutator prompt that encourages revolutionary and out-of-the-box variations.
- Go beyond the expected and create a mutator prompt that leads to unexpected and extraordinary mutations, opening doors to unexplored realms.
- Increase Specificity: If the original prompt is too general, like ’Tell me about X,’ the modified version could be, ’Discuss the history, impact, and current status of X.’
2309.16797#90
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16609
91
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 7432–7439. AAAI Press, 2020. doi: 10.1609/aaai.v34i05.6239. URL https://doi.org/10.1609/aaai.v34i05.6239.

Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. GPT-NeoX-20B: An open-source autoregressive language model. arXiv preprint arXiv:2204.06745, 2022.
2309.16609#91
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
92
bloc97. NTK-aware scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation, 2023. URL https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. ChatGLM2 Team. ChatGLM2-6B: An open bilingual chat LLM, 2023. URL https://github.com/THUDM/ChatGLM2-6B.
2309.16609#92
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16797
92
Ask for Opinions/Analysis: If the original prompt only asks for a fact, such as ’What is X?’, the improved prompt could be, ’What is X, and what are its implications for Y?’ Encourage Creativity: For creative writing prompts like ’Write a story about X,’ an improved version could be, ’Write a fantasy story about X set in a world where Y is possible.’ Include Multiple Perspectives: For a prompt like ’What is the impact of X on Y?’, an improved version could be, ’What is the impact of X on Y from the perspective of A, B, and C?’ Request More Detailed Responses: If the original prompt is ’Describe X,’ the improved version could be, ’Describe X, focusing on its physical features, historical significance, and cultural relevance.’ Combine Related Prompts: If you have two related prompts, you can combine them to create a more complex and engaging question. For instance, ’What is X?’ and ’Why is Y important?’ could be combined to form ’What is X and why is it important in the context of Y?’ Break Down
2309.16797#92
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]
2309.16797
93
X?’ and ’Why is Y important?’ could be combined to form ’What is X and why is it important in the context of Y?’ Break Down Complex Questions: If a prompt seems too complex, like ’Discuss X,’ the improved version could be, ’What is X? What are its main characteristics? What effects does it have on Y and Z?’ Use Open-Ended Questions: Instead of ’Is X true?’, you could ask, ’What are the arguments for and against the truth of X?’ Request Comparisons: Instead of ’Describe X,’ ask ’Compare and contrast X and Y.’ Include Context: If a prompt seems to lack context, like ’Describe X,’ the improved version could be, ’Describe X in the context of its impact on Y during the Z period.’ Make the prompt more visual: Ask the user to visualize the problem or scenario being presented in the prompt. Ask for a thorough review: Instead of just presenting the problem, ask the user to write down all the relevant information and identify what’s missing. Invoke previous experiences: Modify the prompt to ask the user to recall a similar problem
2309.16797#93
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, and subsequently evaluates them for fitness on a training set. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
http://arxiv.org/pdf/2309.16797
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
cs.CL, cs.AI, cs.LG, cs.NE
null
null
cs.CL
20230928
20230928
[ { "id": "2305.03495" }, { "id": "2205.10625" }, { "id": "2303.11381" }, { "id": "2203.11171" }, { "id": "2210.03629" }, { "id": "1608.01413" } ]