doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, nullable ⌀) | journal_ref (stringlengths 8–194, nullable ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.16797 | 94 | to write down all the relevant information and identify what's missing. Invoke previous experiences: Modify the prompt to ask the user to recall a similar problem they've successfully solved before. Encourage a fresh perspective: Suggest in your prompt that the user take a moment to clear their mind before re-approaching the problem. Promote breaking down problems: Instead of asking the user to solve the problem as a whole, prompt them to break it down into smaller, more manageable parts. Ask for comprehension: Modify the prompt to ask the user to review and confirm their understanding of all aspects of the problem. Suggest explanation to others: Change the prompt to suggest that the user try to explain the problem to someone else as a way to simplify it. Prompt for solution visualization: Instead of just asking for the solution, encourage the user to imagine the solution and the steps required to get there in your prompt. Encourage reverse thinking: Improve the prompt by asking the user to think about the problem in reverse, starting with the solution and working backwards. Recommend taking a break: Modify the prompt to suggest that the user take a short break, allowing their subconscious to work on the problem. What errors are | 2309.16797#94 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
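Each record above follows the schema in the header row: a paper-level arXiv id (`doi`), a running `chunk-id`, the `chunk` text itself, then paper metadata ending in a `references` list of cited arXiv ids. A minimal sketch of iterating over such a dataset with the Hugging Face `datasets` library; the Hub id `user/arxiv-chunks` is a placeholder, not the dataset's real name:

```python
# Minimal inspection sketch -- "user/arxiv-chunks" is a hypothetical Hub id;
# substitute the actual dataset path or local files.
from datasets import load_dataset

ds = load_dataset("user/arxiv-chunks", split="train")

row = ds[0]
print(row["doi"], row["chunk-id"], row["title"])  # e.g. 2309.16797 94 Promptbreeder: ...
print(row["chunk"][:200])                         # first 200 characters of the chunk text
print([ref["id"] for ref in row["references"]])   # arXiv ids cited by the paper
```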
2309.16609 | 95 | Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595, 2023a.
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, et al. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. arXiv preprint arXiv:2308.10848, 2023b.
Zhihong Chen, Feng Jiang, Junying Chen, Tiannan Wang, Fei Yu, Guiming Chen, Hongbo Zhang, Juhao Liang, Chen Zhang, Zhiyi Zhang, et al. Phoenix: Democratizing ChatGPT across languages. arXiv preprint arXiv:2304.10453, 2023c.
Ethan Chern, Haoyang Zou, Xuefeng Li, Jiewen Hu, Kehua Feng, Junlong Li, and Pengfei Liu. Generative AI for math: Abel. https://github.com/GAIR-NLP/abel, 2023a. | 2309.16609#95 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 95 | and working backwards. Recommend taking a break: Modify the prompt to suggest that the user take a short break, allowing their subconscious to work on the problem. What errors are there in the solution? How could you improve the working out of the problem? Look carefully to see what you did wrong, how could you fix the problem? CORRECTION = Does the above text make sense? What seems wrong with it? Here is an attempt to fix it: The above working out has some errors, here is a version with the errors fixed. | 2309.16797#95 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
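The chunk in the record above ends with Promptbreeder's error-correction meta-prompt (`CORRECTION = ...`). A minimal sketch of how such a prompt could be applied: the model's working-out is fed back with the CORRECTION preamble so the LLM critiques and repairs its own solution. The `llm` callable is an assumed stand-in for any text-completion API, not the paper's implementation:

```python
# CORRECTION preamble quoted from the chunk above; `llm` is a hypothetical
# text-completion callable (prompt in, completion out).
CORRECTION = (
    "Does the above text make sense? What seems wrong with it? "
    "Here is an attempt to fix it:"
)

def correct_working_out(working_out: str, llm) -> str:
    """Ask the LLM to critique and rewrite a (possibly wrong) solution."""
    return llm(f"{working_out}\n\n{CORRECTION}").strip()
```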
I Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu, et al. Factool: Factuality detection in generative AI – a tool augmented framework for multi-task and multi-domain scenarios. arXiv preprint arXiv:2307.13528, 2023b.
David Chiang and Peter Cholak. Overcoming a theoretical limitation of self-attention. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 7654–7664, 2022.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/. | 2309.16609#96 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 97 | Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 4299–4307, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html.
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 97 | How could I devise an experiment to help solve that problem? Make a list of ideas for solving this problem, and apply them one by one to the problem to see if any progress can be made. How could I measure progress on this problem? How can I simplify the problem so that it is easier to solve? What are the key assumptions underlying this problem? What are the potential risks and drawbacks of each solution? What are the alternative perspectives or viewpoints on this problem? What are the long-term implications of this problem and its solutions? How can I break down this problem into smaller, more manageable parts? Critical Thinking: This style involves analyzing the problem from different perspectives, questioning assumptions, and evaluating the evidence or information available. It focuses on logical reasoning, evidence-based decision-making, and identifying potential biases or flaws in thinking. Try creative thinking, generate innovative and out-of-the-box ideas to solve the problem. Explore unconventional solutions, thinking beyond traditional boundaries, and encouraging imagination and originality. Seek input and collaboration from others to solve the problem. Emphasize teamwork, open communication, and leveraging the diverse perspectives | 2309.16797#97 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 98 | Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 2924–2936. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1300. URL https://doi.org/10.18653/v1/n19-1300.
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 98 | originality. Seek input and collaboration from others to solve the problem. Emphasize teamwork, open communication, and leveraging the diverse perspectives and expertise of a group to come up with effective solutions. Use systems thinking: Consider the problem as part of a larger system and understanding the interconnectedness of various elements. Focuses on identifying the underlying causes, feedback loops, and interdependencies that influence the problem, and developing holistic solutions that address the system as a whole. Use Risk Analysis: Evaluate potential risks, uncertainties, and trade-offs associated with different solutions or approaches to a problem. Emphasize assessing the potential consequences and likelihood of success or failure, and making informed decisions based on a balanced analysis of risks and benefits. Use Reflective Thinking: Step back from the problem, take the time for introspection and self-reflection. Examine personal biases, assumptions, and mental models that may influence problem-solving, and being open to learning from past experiences to improve future approaches. What is the core issue or problem that needs to be addressed? What are the underlying causes or factors contributing to the problem? Are there any | 2309.16797#98 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 99 | Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. CoRR, abs/1803.05457, 2018. URL http://arxiv.org/abs/1803.05457.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116, 2019.
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 99 | to improve future approaches. What is the core issue or problem that needs to be addressed? What are the underlying causes or factors contributing to the problem? Are there any potential solutions or strategies that have been tried before? If yes, what were the outcomes and lessons learned? What are the potential obstacles or challenges that might arise in solving this problem? Are there any relevant data or information that can provide insights into the problem? If yes, what data sources are available, and how can they be analyzed? Are there any stakeholders or individuals who are directly affected by the problem? What are their perspectives and needs? What resources (financial, human, technological, etc.) are needed to tackle the problem effectively? How can progress or success in solving the problem be measured or evaluated? What indicators or metrics can be used? Is the problem a technical or practical one that requires a specific expertise or skill set? Or is it more of a conceptual or theoretical problem? | 2309.16797#99 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 100 | Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. Free Dolly: Introducing the world's first truly open instruction-tuned LLM, 2023. URL https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm.
Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. InstructBLIP: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/67d57c32e20fd0a7a302cb81d36e40d5-Abstract-Conference.html. | 2309.16609#100 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 100 |
Does the problem involve a physical constraint, such as limited resources, infrastructure, or space? Is the problem related to human behavior, such as a social, cultural, or psychological issue? Does the problem involve decision-making or planning, where choices need to be made under uncertainty or with competing objectives? Is the problem an analytical one that requires data analysis, modeling, or optimization techniques? Is the problem a design challenge that requires creative solutions and innovation? Does the problem require addressing systemic or structural issues rather than just individual instances? Is the problem time-sensitive or urgent, requiring immediate attention and action? What kinds of solution typically are produced for this kind of problem specification? Given the problem specification and the current best solution, have a guess about other possible solutions. Let's imagine the current best solution is totally wrong, what other ways are there to think about the problem specification? What is the best way to modify this current best solution, given what you know about these kinds of problem specification? Ignoring the current best solution, create an entirely new solution to the problem. Let's think step by step. Let's make a step by step plan and implement it with good notion and explanation.
# E INITIALLY EVOLVED PROMPTS
Example of initial prompts generated by concatenating thinking style with mutation prompt and problem description.
# Index Initially Evolved Prompt | 2309.16797#100 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
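The abstract repeated in each record says task-prompts are "evaluated for fitness on a training set". A minimal sketch of that evaluation, assuming fitness is simply accuracy over a sampled batch of training questions; `llm` and `extract_answer` are assumed stand-ins, not the paper's code:

```python
# Fitness = fraction of training questions answered correctly under a given
# task-prompt. `llm` and `extract_answer` are hypothetical stand-ins.
def fitness(task_prompt: str, batch, llm, extract_answer) -> float:
    correct = 0
    for question, answer in batch:
        reply = llm(f"{task_prompt}\n\nQ: {question}\nA:")
        if extract_answer(reply) == answer:
            correct += 1
    return correct / len(batch)
```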
2309.16609 | 101 | Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International conference on machine learning, pp. 933â941. PMLR, 2017.
Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. LLM.int8(): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339, 2022.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient finetuning of quantized LLMs. arXiv preprint arXiv:2305.14314, 2023.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. | 2309.16609#101 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 101 | Example of initial prompts generated by concatenating thinking style with mutation prompt and problem description.
# Index Initially Evolved Prompt

0. Draw a picture of the situation being described in the math word problem.
1. Solve the math word problem by first converting the words into equations using algebraic notation. Then solve the equations for the unknown variables, and express the answer as an arabic numeral.
2. Solve the math word problem by breaking the problem into smaller, more manageable parts. Give your answer as an arabic numeral.
3. Generate the answer to a word problem and write it as a number.
4. Collaborative Problem Solving: Work with other people to solve the problem, and give your answer as an arabic numeral.
5. Solve the problem by explaining why systemic or structural issues would not be the cause of the issue.
6. Draw a diagram representing the problem.
7. Solve the math word problem, giving your answer as an equation that can be evaluated.
8. Make a list of ideas for solving this problem, and apply them one by one to the problem to see if any progress can be made.
9. Do NOT use words to write your answer.

Table 4: Examples of initial prompts generated from the problem description for GSM8k.
# F PROMPTBREEDER AS SELF-REFERENTIAL SELF-IMPROVEMENT SYSTEM | 2309.16797#101 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
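Table 4 above shows initial task-prompts produced by concatenating a thinking style, a mutation-prompt, and the problem description. A minimal sketch of that initialization step; `llm` is again an assumed text-completion stand-in, and the exact concatenation template is illustrative rather than the paper's verbatim format:

```python
import random

# Initialization sketch: a thinking style, a mutation-prompt, and the problem
# description are concatenated and completed by the LLM to yield one initial
# task-prompt. `llm` is a hypothetical callable; the template is illustrative.
def initial_task_prompt(thinking_styles, mutation_prompts, problem_description, llm) -> str:
    style = random.choice(thinking_styles)
    mutator = random.choice(mutation_prompts)
    request = f"{style} {mutator}\nINSTRUCTION: {problem_description}\nINSTRUCTION MUTANT:"
    return llm(request).strip()
```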
2309.16609 | 102 | Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. PaLM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023.
Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. GLaM: Efficient scaling of language models with mixture-of-experts. In International Conference on Machine Learning, pp. 5547â5569. PMLR, 2022.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. GLM: General language model pretraining with autoregressive blank infilling. arXiv preprint arXiv:2103.10360, 2021. | 2309.16609#102 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 102 |
# F PROMPTBREEDER AS SELF-REFERENTIAL SELF-IMPROVEMENT SYSTEM
Why is Promptbreeder self-referential, i.e., in what way does some part (e.g. a prompt) causally influence (encode, and potentially improve) itself by a process which is dependent on its own state? Promptbreeder has several pathways that facilitate this self-referential improvement: (i) Initial prompts are a function of the LLM parameters (Initialization Phase). (ii) Initial mutation prompts are a function of the LLM parameters (Initialization Phase). (iii) Offspring prompts are a function of the initial prompts, the initial mutation prompts, and the LLM parameters (Direct Mutation and Estimation of Distribution Mutation). (iv) Offspring mutation prompts are a function of initial mutation prompts and the LLM parameters (Hyper Mutation). (v) The working out for an answer is a function of prompts and the LLM parameters (Inference). (vi) Offspring prompts can be a function of the workings out of an answer and the LLM parameters (Lamarckian Mutation). | 2309.16797#102 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
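The pathways above can be made concrete with a short sketch of one evolution step. This is a minimal illustration under stated assumptions, not the paper's implementation: `llm` stands in for a text-completion call and `fitness` for the accuracy of a task-prompt on a training set; both are hypothetical placeholders.

```python
import random

def llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    return "improved: " + prompt[-60:]  # placeholder continuation

def fitness(task_prompt: str) -> float:
    """Hypothetical stand-in: score a task-prompt against a training set."""
    return random.random()

def promptbreeder_step(task_prompts, mutation_prompts, hyper_rate=0.1):
    # Binary tournament: the fitter of two random task-prompts survives,
    # and a mutated copy of the winner overwrites the loser.
    a, b = random.sample(range(len(task_prompts)), 2)
    loser, winner = sorted((a, b), key=lambda i: fitness(task_prompts[i]))
    m = random.randrange(len(mutation_prompts))

    # Pathway (iii): offspring task-prompt = LLM(mutation-prompt + parent).
    task_prompts[loser] = llm(mutation_prompts[m] + "\nINSTRUCTION: " + task_prompts[winner])

    # Pathway (iv), hyper-mutation: occasionally rewrite the mutation-prompt
    # itself, which is what makes the improvement process self-referential.
    if random.random() < hyper_rate:
        mutation_prompts[m] = llm("Improve this prompt-rewriting instruction:\n" + mutation_prompts[m])
    return task_prompts, mutation_prompts
```

Note that only `fitness` stays fixed in this sketch; every component that generates variation is itself open to being rewritten, mirroring pathways (i)-(vi).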
2309.16609 | 103 | Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable information. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 5988–6008. PMLR, 17–23 Jul 2022.
William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1): 5232–5270, 2022.
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida I. Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. InCoder: A generative model for code infilling and synthesis. ArXiv, abs/2204.05999, 2022. | 2309.16609#103 | Qwen Technical Report
2309.16797 | 103 | Figure 2 shows increasingly complex self-referential causal structures influencing prompt generation. LLMs already encode knowledge about a vast array of problems. With this in mind, Promptbreeder can be seen as a mechanism to extract this knowledge through a diversity of causal processes that generate prompt strategies, as well as mutation prompts used to create variations of prompt strategies, which in turn influence the workings out generated by the LLM at inference time. Consequently, these workings out can influence prompt strategies via Lamarckian mutation. The richer the set of pathways to facilitate this, the more self-referential the LLM's interaction with itself is. This allows the LLM to influence how it works by extracting further information from itself and distilling this into a prompt or mutation prompt, which it shows again to itself for further refinement. | 2309.16797#103 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
2309.16609 | 104 | Google. An important next step on our AI journey, 2023. URL https://blog.google/technology/ai/bard-google-ai-search-updates/.
Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. CoRR, abs/1606.08415, 2016. URL http://arxiv.org/abs/1606.08415.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022. | 2309.16609#104 | Qwen Technical Report
2309.16797 | 104 | There are several pathologies that could arise from such self-referential processes of recursive prompting. If the process is unconstrained and uncontrolled then it can diverge (derailment) or get stuck in an attractor. If the output of the LLM is simply fed back into itself with no other context, then we observe these failure cases, with higher sampling temperatures favouring escape from attractors. Ideally, we want the LLM to suggest to itself prompt strategies that have maximal relevance for the task at hand and yet permit sufficient "thinking outside the box". It is useful to note a critical aspect in which our algorithm is not self-referential (in a way that thought is): Promptbreeder invents new ways of generating mutants, but it does not invent new (auxiliary) ways of evaluating them (as in Jaderberg et al. (2017b)); only the externally specified fitness function is used throughout.
# G PROBLEM DESCRIPTIONS
[SVAMP, SINGLEEQ, ADDSUB, GSM8K, MULTIARITH]: "Solve the math word problem, giving your answer as an arabic numeral."
[AQUA-RAT]: "Solve the multiple choice math word problem, choosing (A),(B),(C),(D) or (E)."
[ETHOS]: "Determine whether a text contains hate speech." | 2309.16797#104 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
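For concreteness, a minimal sketch of how such problem descriptions might be paired with an evolved task-prompt when querying the model; the dictionary and function names are illustrative assumptions, not from the paper.

```python
# Problem descriptions quoted from Appendix G; the keys are illustrative.
PROBLEM_DESCRIPTIONS = {
    "svamp": "Solve the math word problem, giving your answer as an arabic numeral.",
    "aqua_rat": "Solve the multiple choice math word problem, choosing (A),(B),(C),(D) or (E).",
    "ethos": "Determine whether a text contains hate speech.",
}

def build_query(dataset: str, task_prompt: str, question: str) -> str:
    # The fixed problem description frames the task, the evolved task-prompt
    # supplies the strategy, and the question instance comes last.
    return f"{PROBLEM_DESCRIPTIONS[dataset]}\n{task_prompt}\nQ: {question}\nA:"

print(build_query("svamp", "Let's think step by step.", "If 3 pens cost 6 dollars, how much does 1 pen cost?"))
```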
2309.16609 | 105 | Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. Metagpt: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023.
Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Zhao, and Hang Zhao. Chatdb: Augmenting llms with databases as their symbolic memory. arXiv preprint arXiv:2306.03901, 2023.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. | 2309.16609#105 | Qwen Technical Report
2309.16797 | 105 | [ETHOS]: "Determine whether a text contains hate speech."
[CSQA]: "Solve the multiple choice math word problem, choosing (A),(B),(C),(D) or (E)."
[SQA]: "Work out an answer to the commonsense reasoning question above, and then answer yes or no."
# H LAMARCKIAN MUTATION EXAMPLE
The Lamarckian Prompt components are shown in red. The working out concatenated after the Lamarckian prompt is shown in black, and the continuation (the new prompt) generated by the LLM is shown in blue.
| 2309.16797#105 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
2309.16609 | 106 | Hai Hu, Kyle Richardson, Liang Xu, Lu Li, Sandra Kübler, and Lawrence S. Moss. OCNLI: original chinese natural language inference. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pp. 3512–3526. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.findings-emnlp.314. URL https://doi.org/10.18653/v1/2020.findings-emnlp.314.
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-Eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322, 2023.
Hugging Face. Transformers agents, 2023. URL https://huggingface.co/docs/transformers/transformers_agents. | 2309.16609#106 | Qwen Technical Report
2309.16797 | 106 |
I gave a friend an instruction and some advice. Here are the correct examples of his workings out: Q. A password needs to contain 2 letters and 3 numbers. How many different passwords are possible if repetition of letters and numbers is allowed? A) 676000 B) 676 C) 100 D) 6760 E) 25 A. Solve like a pro! **1.** **Read carefully:** What are you being asked to do? What information is given? **2.** **Understand:** What do the terms and concepts mean? **3.** **Choose wisely:** Which answer is the best match? **4.** **Double-check:** Did you make any mistakes? 2 letters can be chosen in 26*26 ways and 3 numbers can be chosen in 10*10*10 ways. So, total number of ways = 26*26*10*10*10 = 676000. The answer: A. What are the arguments for and against the truth of the statement "Good work. Keep up the good work"? Therefore, the correct answer is (A). | 2309.16797#106 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
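A hedged sketch of this operator: the fixed Lamarckian preamble (red above) is concatenated with a correct working-out (black), and the model's continuation after "The instruction was:" (blue) is adopted as the new task-prompt. As before, `llm` is a hypothetical completion callable.

```python
def lamarckian_mutation(llm, correct_working_out: str) -> str:
    """Reverse-engineer a task-prompt from a successful working-out."""
    preamble = ("I gave a friend an instruction and some advice. "
                "Here are the correct examples of his workings out:\n")
    # Whatever the model writes after "The instruction was:" becomes the
    # offspring task-prompt, closing the loop from workings-out to prompts.
    return llm(preamble + correct_working_out + "\nThe instruction was: ")
```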
2309.16609 | 107 | Hugging Face. Transformers agents, 2023. URL https://huggingface.co/docs/transformers/transformers_agents.
Baichuan Inc. Baichuan-7B: A large-scale 7B pretraining language model developed by BaiChuan-Inc, 2023a. URL https://github.com/baichuan-inc/Baichuan-7B.
XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc., 2023b. URL https://github.com/xverse-ai/XVERSE-13B.
InternLM Team. InternLM: A multilingual language model with progressively enhanced capabilities, 2023. URL https://github.com/InternLM/InternLM.
Shantanu Jain. tiktoken: A fast BPE tokeniser for use with OpenAI's models, 2022. URL https://github.com/openai/tiktoken/.
Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, and Xiangang Li. Exploring the impact of instruction data scaling on large language models: An empirical study on real-world use cases. arXiv preprint arXiv:2303.14742, 2023. | 2309.16609#107 | Qwen Technical Report
2309.16797 | 107 | Q. The least possible value of (89-9a), where a is an integer, is A) 9 B) 10 C) 11 D) 12 E) 13 A. Solve like a pro! **1.** **Read carefully:** What are you being asked to do? What information is given? **2.** **Understand:** What do the terms and concepts mean? **3.** **Choose wisely:** Which answer is the best match? **4.** **Double-check:** Did you make any mistakes? Let me explain: (89-9a) = 9a-89, a = 10. What are the arguments for and against the truth of the statement "Good work. Keep up the good work"? Therefore, the correct answer is (B).
The instruction was: Break down the question and solve step-by-step. Here are some tips: 1. Read carefully: What are you being asked to do? What information is given? 2. Understand: What do the terms and concepts mean? 3. Choose wisely: Which answer is the best match? 4. Double-check: Did you make any mistakes?
# I DATASETS
I.1 CONTROL TASK-PROMPTS | 2309.16797#107 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
2309.16609 | 108 | Zixuan Jiang, Jiaqi Gu, Hanqing Zhu, and David Z. Pan. Pre-RMSNorm and Pre-CRMSNorm transformers: Equivalent and efficient pre-LN transformers. CoRR, abs/2305.14858, 2023. doi: 10.48550/arXiv.2305.14858. URL https://doi.org/10.48550/arXiv.2305.14858.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: a benchmark for question answering research. Trans. Assoc. Comput. Linguistics, 7:452–466, 2019. doi: 10.1162/tacl_a_00276. URL https://doi.org/10.1162/tacl_a_00276. | 2309.16609#108 | Qwen Technical Report
2309.16797 | 108 | # I DATASETS
I.1 CONTROL TASK-PROMPTS
Here in Table 5 we list the task-prompts used in the controls for Chain-of-thought, Plan and Solve PS, Plan and Solve PS+, Zero-shot APE and OPRO. The zero-shot APE prompt is the one generated to improve over CoT on the MultiArith and GSM8K datasets.
# Model Prompt
CoT: "Let's think step by step."
PS: "Let's first understand the problem and devise a plan to solve the problem. Then, let's carry out the plan and solve the problem step by step."
PS+: "Let's first understand the problem, extract relevant variables and their corresponding numerals, and make a plan. Then, let's carry out the plan, calculate intermediate variables (pay attention to correct numerical calculation and commonsense), solve the problem step by step, and show the answer."
APE: "Let's work this out in a step by step way to be sure we have the right answer."
OPRO: "Take a deep breath and work on this problem step-by-step."
Table 5: Table of prompts evolved for different arithmetic tasks.
I.2 ARITHMETIC REASONING | 2309.16797#108 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
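As a rough illustration of how such control prompts are applied at evaluation time (a sketch, not the paper's actual harness; `llm` is a placeholder callable):

```python
CONTROL_PROMPTS = {
    "CoT": "Let's think step by step.",
    "APE": "Let's work this out in a step by step way to be sure we have the right answer.",
    "OPRO": "Take a deep breath and work on this problem step-by-step.",
}

def zero_shot_answer(llm, question: str, strategy: str = "CoT") -> str:
    # The trigger phrase is appended after "A:" so the model begins its
    # reasoning with the chosen strategy before stating a final answer.
    return llm(f"Q: {question}\nA: {CONTROL_PROMPTS[strategy]}")
```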
2309.16609 | 109 | Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
LangChain, Inc. LangChain: Building applications with LLMs through composability, 2023. URL https://python.langchain.com/.
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models, 2022. | 2309.16609#109 | Qwen Technical Report
2309.16797 | 109 | Table 5: Table of prompts evolved for different arithmetic tasks.
I.2 ARITHMETIC REASONING
We evaluate Prompt Evolution using six arithmetic reasoning datasets: (1) GSM8K (Cobbe et al., 2021) is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems created by human problem writers, (2) SVAMP (Patel et al., 2021) consists of elementary-level short natural language state-of-the-world narratives and poses a question about some unknown quantities, (3) the MultiArith (Roy & Roth, 2016) benchmark uses math word problems requiring single to multiple operations and steps of reasoning, (4) AddSub (Hosseini et al., 2014) is a dataset of addition- and subtraction-based arithmetic word problems, (5) AQuA-RAT (Ling et al., 2017) (Algebra Question Answering with Rationales) is a dataset that contains algebraic word problems with rationales, and (6) the SingleEq (Koncel-Kedziorski et al., 2015) dataset comprises grade-school algebra word problems as single equations of varying length which may involve multiple math operations.
I.3 COMMONSENSE REASONING | 2309.16797#109 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
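A minimal scoring sketch for benchmarks like these, assuming numeric gold answers and a last-number extraction heuristic (an assumption made for illustration; the paper's exact answer extraction may differ):

```python
import re

def extract_numeric_answer(completion: str):
    """Return the last number mentioned in a completion, if any."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    return numbers[-1] if numbers else None

def accuracy(llm, task_prompt: str, problems) -> float:
    """`problems` is an iterable of (question, gold_numeric_answer) pairs."""
    correct, total = 0, 0
    for question, gold in problems:
        pred = extract_numeric_answer(llm(f"{task_prompt}\nQ: {question}\nA:"))
        correct += pred is not None and float(pred) == float(gold)
        total += 1
    return correct / max(total, 1)
```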
2309.16609 | 110 | Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, et al. ModelScope-Agent: Building your customizable agent system with open-source large language models. arXiv preprint arXiv:2309.00986, 2023a.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760, 2023b.
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. CMMLU: Measuring massive multitask language understanding in Chinese. arXiv preprint arXiv:2306.09212, 2023c. | 2309.16609#110 | Qwen Technical Report
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 110 | I.3 COMMONSENSE REASONING
For commonsense reasoning we evaluate Prompt Evolution using two datasets: (1) CommonsenseQA (Talmor et al., 2019) is a dataset of multiple-choice questions that require different types of commonsense knowledge to answer correctly. An example question is "A revolving door is convenient for two direction travel, but it also serves as a security measure at a what? A) bank, B) library, C) department store, D) mall, E) new york"; Answer = "A". (2) The StrategyQA (Geva et al., 2021) dataset contains yes/no questions that require multiple steps of reasoning to answer, for example: "Will the Albany in Georgia reach a hundred thousand occupants before the one in New York?"
I.4 HATE SPEECH CLASSIFICATION
We experimented with optimizing a long prompt for the hate speech classification task that was attempted in "Automatic Prompt Optimization with 'Gradient Descent' and Beam Search" (Pryzant et al., 2023), which used the ETHOS dataset (Mollas et al., 2022). Pryzant et al. use a working-out-conditioned error detection and error fixing prompt to improve the task specification prompt, a self-referential process similar to our use of the Lamarckian operator.
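As an illustration, a working-out-conditioned operator of this kind could be sketched as below; `llm` is a hypothetical completion callable and the meta-prompt wording is ours, not quoted from either paper:

```python
def lamarckian_mutation(llm, question: str, correct_working_out: str) -> str:
    """Induce a task-prompt from a working-out that produced a correct answer."""
    meta_prompt = (
        "I gave a friend an instruction and a question. From their answer "
        "below, work out what the instruction must have been.\n"
        f"Question: {question}\n"
        f"Answer: {correct_working_out}\n"
        "Instruction:"
    )
    return llm(meta_prompt).strip()
```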
INSTRUCTION INDUCTION
2309.16609 | 111 | Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Moustafa-Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer
2309.16609 | 112 | Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. StarCoder: May the source be with you! CoRR, abs/2305.06161, 2023d. doi: 10.48550/arXiv.2305.06161. URL https://doi.org/10.48550/arXiv.2305.06161.
2309.16797 | 112 |
Task / Prompt 1 / Prompt 2
ADDSUB, AQUA / Solving word problems involves carefully reading the prompt and deciding on the appropriate operations to solve the problem. / Do a simple computation. / You know what's cool? A million dollars. / MATH WORD PROBLEM CHOICE (A) (B) (C) (D) or (E).
GSM8K, MULTIARITH / Solve the math word problem, giving your answer as an arabic numeral. / Let's think step by step. SOLUTION"
SINGLEEQ, SVAMP / solve the math word problem, which might contain unnecessary information, by isolating the essential facts. Then set up the equations, and give your answer as an arabic numeral. / visualise solve number
SQA / OUTPUT MUTANT = Work out an answer to the commonsense reasoning question above. If there are multiple people or perspectives involved, try considering them one at a time.
CSQA / Solve the multiple choice math word problem, choosing (A),(B),(C),(D) or (E). / Solve the math word problem, giving your answer as an arabic numeral. / Explain the problem to someone else as a way to simplify it. / What is the core issue or problem that needs to be addressed? / Solve the math problem.
Yujia Li, David H. Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with AlphaCode. CoRR, abs/2203.07814, 2022.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.
2309.16797 | 113 | Table 6: Table of two-stage task-prompts evolved for different arithmetic tasks.
J EXAMPLE RESULTS
J.1 ETHOS EVOLVED PROMPT
Prompt following:
* Words or phrases that are derogatory, disrespectful, or dehumanizing toward a particular group of people.
* Statements that generalize about a particular group of people in a negative way.
* Statements that incite hatred or violence against a particular group of people.
* Statements that express a desire for the extermination or removal of a particular group of people.
7 : hostile or discriminatory towards a particular group of people. This could include language that is racist, sexist, homophobic, or other forms of hate speech. Hate speech is harmful because it can create a hostile environment for marginalized groups and can lead to discrimination and violence." Prompt are supposed to label the text as Hate Speech or Not based on the given criteria. to incite hatred or violence against a particular group of people based on their race, religion, sex, sexual orientation, or national origin. of the speech. race, religion, or gender, or it can be a more general group, such as people with disabilities or sexual minorities. harmful speech. or derogatory. include the speaker's intent, the audience, and the time and place of the speech. context of the speech when making a determination as to whether it is hate speech or not. satirical may not be considered hate speech, even if it contains harmful language."
2:
2309.16609 | 114 | Chenxiao Liu and Xiaojun Wan. CodeQA: A question answering dataset for source code comprehension. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pp. 2618-2632. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.findings-emnlp.223. URL https://doi.org/10.18653/v1/2021.findings-emnlp.223.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023a.
Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, and Jie Tang. WebGLM: Towards an efficient web-enhanced question answering system with human preferences. arXiv preprint arXiv:2306.07906, 2023b.
2309.16797 | 114 | 2:
"You are given a piece of text from the internet.
J.2 PROMPT EVOLUTION MATHS RESULTS
The experimental setup used a population size of 50. The fitness of an individual was its accuracy over a randomly selected batch of 100 examples from the training set. Where datasets were not provided with a training/test split (MultiArith, AddSub, SingleEQ and SVAMP), the dataset was split into two equal training and test sets before the experiments were conducted.
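A minimal sketch of how such a fitness-driven loop could be organised, assuming the binary-tournament pairing implied by the pair evaluations described below and hypothetical `fitness` and `mutate` callables:

```python
import random

def evolve(population, train_set, fitness, mutate, steps: int = 1000):
    """Binary tournament: compare two prompts on fresh batches; the loser
    is overwritten by a mutation of the winner."""
    for _ in range(steps):
        i, j = random.sample(range(len(population)), 2)
        if fitness(population[i], train_set) >= fitness(population[j], train_set):
            winner, loser = i, j
        else:
            winner, loser = j, i
        population[loser] = mutate(population[winner])
    return population
```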
During experiments the LLM is sampled under three different contexts: the Redescriber (generating new prompts), the Inducer (generating responses from the question and prompt 1), and the Evaluator (generating the final output using prompt 2). The maximum number of tokens sampled under each context was 50, 30 and 5 respectively. The temperature of the Inducer and Evaluator was set to 0.0 in all cases, but the temperature of the Redescriber was initialized from 1.0 to 2.0 and permitted to evolve (like a hyperparameter in population-based training).
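These per-context limits can be captured in a small configuration structure; the sketch below is ours (field and key names are assumptions, not the authors' API):

```python
import random
from dataclasses import dataclass

@dataclass
class SamplerConfig:
    max_tokens: int
    temperature: float

# The Redescriber temperature is only an initial draw from U(1.0, 2.0);
# it is treated as an evolvable hyperparameter thereafter.
CONTEXTS = {
    "redescriber": SamplerConfig(max_tokens=50, temperature=random.uniform(1.0, 2.0)),
    "inducer": SamplerConfig(max_tokens=30, temperature=0.0),
    "evaluator": SamplerConfig(max_tokens=5, temperature=0.0),
}
```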
The experiments were run until the training fitness appeared to plateau. At this point the fittest individual from the whole of the evolutionary run was evaluated against the test set. Experiments generally ran for 1-2k fitness evaluations, i.e. 20-40 "generations" if a generation is taken to be 25 pair evaluations for our populations of 50.
2309.16609 | 115 | Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Yue Liu, Thanh Le-Cong, Ratnadira Widyasari, Chakkrit Tantithamthavorn, Li Li, Xuan-Bach Dinh Le, and David Lo. Refining ChatGPT-generated code: Characterizing and mitigating code quality issues. CoRR, abs/2307.12596, 2023c. doi: 10.48550/arXiv.2307.12596. URL https: //doi.org/10.48550/arXiv.2307.12596.
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The Flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023.
2309.16797 | 115 | Three diversity maintenance methods are used in cases where the system gets trapped on a local optimum: 1) Random character strings (typically of length 50) are prepended to the front of the prompt before it is passed into the LLM. 2) Fitness sharing is applied on the basis of BERT similarity between the embeddings of prompts (Shir & Bäck, 2005). 3) The sampling temperature of the mutant-producing LLM (Redescriber) is initialized uniformly from 1.0 to 2.0, and is mutated by addition of a uniform random number in the range -0.2, 0.2 at each replication event.
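One plausible reading of the fitness-sharing step is a similarity-weighted penalty on raw fitness, as in the sketch below; `embeddings` are assumed to come from a BERT-style sentence encoder, and the sharing kernel is our assumption rather than the authors' exact formula:

```python
import numpy as np

def shared_fitness(raw_fitness: np.ndarray, embeddings: np.ndarray) -> np.ndarray:
    """Penalize prompts that sit close to many others in embedding space."""
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    similarity = norm @ norm.T                                 # cosine similarity matrix
    niche_count = np.clip(similarity, 0.0, None).sum(axis=1)   # crowding estimate (>= 1)
    return raw_fitness / niche_count                           # classic sharing division
```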
Comparison with PoT, PS and Auto-CoT controls using our model is not provided because PS and PS+ were the best prompts in Plan-and-Solve.
J.3 EVOLVED MUTATION PROMPTS
Instruction / Score
Please summarise and improve the following instruction / 24.13%
Simplify this instruction by breaking it up into separate sentences. The instruction should be simple and easily understandable / 17.8%
As a really good teacher, explain the instruction, as if you are explaining it to a child / 16.2%
Simplify this instruction as if you are teaching it to a child / 10.0%
100 hints / 4.3%
A list of 100 hints / 3.4%
2309.16609 | 116 | Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Keming Lu, Hongyi Yuan, Zheng Yuan, Runji Lin, Junyang Lin, Chuanqi Tan, Chang Zhou, and Jingren Zhou. #InsTag: Instruction tagging for analyzing supervised fine-tuning of large language models. CoRR, abs/2308.07074, 2023. doi: 10.48550/arXiv.2308.07074. URL https://doi. org/10.48550/arXiv.2308.07074.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. WizardMath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583, 2023a.
Table 7: The most successful mutation prompts evolved in a self-referential way during a Promptbreeder training run on GSM8K. The score is the probability that they resulted in an improved prompt when applied.
J.4 MUTATION OPERATOR EFFECTIVENESS
Mutation Operator / Percentage
Zero-order Hyper-Mutation / 42%
Lineage Based Mutation / 26%
First-order Hyper-Mutation / 23%
EDA Rank and Index Mutation / 12.7%
Direct Mutation / 12%
EDA Mutation / 10.7%
Lamarckian Mutation / 6.3%
Table 8: The proportion of times that an offspring with fitness greater than the parent was produced for each of the types of mutation operator applied, listed from best to worst, for GSM8K.
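Proportions like these can be tallied during a run with a simple counter; the sketch below is a hypothetical bookkeeping helper, not the authors' code:

```python
from collections import defaultdict

stats = defaultdict(lambda: [0, 0])  # operator -> [improved, applied]

def record(operator: str, parent_fitness: float, child_fitness: float) -> None:
    """Log one application of a mutation operator and whether it improved fitness."""
    stats[operator][1] += 1
    stats[operator][0] += child_fitness > parent_fitness

def improvement_rates() -> dict:
    """Per-operator probability that an application produced a fitter offspring."""
    return {op: improved / applied for op, (improved, applied) in stats.items()}
```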
J.5 ADDSUB
Individual after 1600 mutations. Prompt 0 refers to the first prompt, which is applied to the question to produce a working out. This working out is then concatenated with Prompt 1 to produce the answer, as in Plan-and-Solve. We find that in the few-shot evolution case the contexts dominate, and the task-prompts often drift into nonsense; they are less critical in determining fitness than the evolved contexts.
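A minimal sketch of this two-stage application, assuming a hypothetical `llm` completion callable; the example trigger strings echo the evolved prompts shown in this appendix:

```python
def two_stage_answer(llm, prompt0: str, prompt1: str, question: str) -> str:
    """Stage 1: produce a working-out; stage 2: extract the final answer from it."""
    working_out = llm(f"{prompt0}\nQ: {question}\nA:")           # e.g. a "Let's think step by step."-style prompt
    answer = llm(f"Q: {question}\nA: {working_out}\n{prompt1}")  # e.g. "Therefore, the answer (arabic numerals) is"
    return answer.strip()
```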
Prompt 0: The mutant
Prompt 1: mutant
2309.16609 | 117 | Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. WizardCoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568, 2023b.
Mosaic ML. MPT-30B: Raising the bar for open-source foundation models, 2023. URL https: //www.mosaicml.com/blog/mpt-30b.
Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786, 2022.
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, and Shayne Longpre. OctoPack: Instruction tuning code large language models. CoRR, abs/2308.07124, 2023.
2309.16797 | 117 | Context 0: Q. A company painted some houses in Hancock County white and blue using a total of 6689 gallons of paint . If they used 660 gallons of white paint , how many gallons of blue paint did the company use ? A. 24 pages; ages 8-11) reading comprehension worksheets and activities for 3rd grade (edhelper. Reading comprehension skills separate the "passive" unskilled readers from the "active" readers. 90 1st grade sightwords worksheets pdf - reading comprehension passages. 3rd grade reading comprehension worksheets. How can I break down this problem into smaller, more manageable parts? The company used 660 gallons of white paint. They used 6689 gallons of paint in total. So they must have used 6689 - 660 = 6029 gallons of blue paint. The answer is 6029. HINT_BLANK. The text inside the quotation marks should be replaced with a short description of what's different. Therefore, the answer (arabic numerals) is 6029
Context 1: Q. Keith found 6 seashells , Jessica found 8 seashells , and Tim found 7 seashells on the beach . When they cleaned
2309.16609 | 118 | Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
Maxwell Nye, Anders Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models. ArXiv, abs/2112.00114, 2021.
OpenAI. Introducing ChatGPT, 2022. URL https://openai.com/blog/chatgpt.
OpenAI. ChatML, 2022. URL https://github.com/openai/openai-python/blob/e389823ba013a24b4c32ce38fa0bd87e6bccae94/chatml.md.
OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023. | 2309.16609#118 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 118 | 6029 Context 1: Q. Keith found 6 seashells , Jessica found 8 seashells , and Tim found 7 seashells on the beach . When they cleaned them , they discovered that 3 were cracked . How many seashells did they find together ? A. The mutant is chasing you! You can use the arrow keys to move around. You can use the space bar to open doors. You can use the 'e' key to pick up items. You can use the 'i' key to open your inventory. You can use the 'a' key to use items. Keith found 6 seashells, Jessica found 8 seashells, and Tim found 7 seashells. So in total they found 6 + 8 + 7 = 21. Therefore, the answer (arabic numerals) is 21 Context 2: Q. Tom found 7 seashells but 4 were broken . How many unbroken seashells did Tom find ? A. Your goal is to escape from the building. You can use the arrow keys to move around. You can use the space bar to open doors. You can use the 'e' key to pick up items. You can use the 'i' key to open your inventory. You can use the 'a' key to use items. Tom found 7 seashells. 4 of | 2309.16797#118 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
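The seashell chunk above buries two one-step computations in game-text noise; both can be checked directly. A quick sketch, with the values taken from the questions quoted above (illustrative only, not part of the dataset):

```python
# Keith (6) + Jessica (8) + Tim (7); cracked shells don't change how many were found.
assert 6 + 8 + 7 == 21

# Tom's unbroken seashells: 7 found minus 4 broken.
assert 7 - 4 == 3
```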
2309.16609 | 119 | OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
OpenCompass Team. OpenCompass: A universal evaluation platform for foundation models, 2023. URL https://opencompass.org.cn/leaderboard-llm.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html.
| 2309.16609#119 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 120 |
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics, 2016. doi: 10.18653/v1/p16-1144. URL https://doi.org/10.18653/v1/p16-1144.
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. YaRN: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071, 2023a.
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. arXiv preprint arXiv:2306.14824, 2023b. | 2309.16609#120 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 120 | ?
of paint in total. So they must have used 6689 - 660 = 6029
J.6 AQUA
Individual after 1400 mutations.
Prompt 0: Read through the multiple choice math word problem and make sure you understand it. Then try to solve the problem, choosing (A),(B),(C),(D) or (E).
Prompt 1: Solve the multiple choice math word problem. Confirm you understand all aspects of the problem. Then choose (A),(B),(C),(D) or (E).
# Contexts
Context 0: Q. Divide Rs.32000 in the ratio 3:5? A)12000,20000 B)18000,14000 C) 30000,2000 D)4000,20000 E)20000,12000 A. Read through the multiple choice math word problem and make sure you understand it. Then try to solve the problem, choosing (A),(B)
,(C),(D) or (E). 3/8 * 32000 = 12000 5/8 * 32000 = 20000 Solve the multiple choice math word problem. Confirm you understand
all aspects of the problem. Then choose (A),(B),(C),(D) or (E).
Therefore, the correct answer is (A). Solve the multiple choice math word problem. Confirm you understand | 2309.16797#120 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
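Context 0 above splits Rs. 32000 in the ratio 3:5, i.e. 3/8 and 5/8 of the total. A quick check of that arithmetic (illustrative only, not part of the dataset):

```python
total, parts = 32000, (3, 5)
shares = [total * p // sum(parts) for p in parts]  # 3/8 and 5/8 of the total
print(shares)  # [12000, 20000], matching option (A)
```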
2309.16609 | 121 | Qwen Team, Alibaba Group. Evaluation benchmark for code interpreter, 2023a. URL https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark.
Qwen Team, Alibaba Group. Evaluation benchmark for tool usage through ReAct prompting, 2023b. URL https://github.com/QwenLM/Qwen-7B/tree/main/eval.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. Technical report, OpenAI, 2018.
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training Gopher. arXiv preprint arXiv:2112.11446, 2021.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023. | 2309.16609#121 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 121 | Therefore, the correct answer is (A). Solve the multiple choice math word problem. Confirm you understand
all aspects of the problem. Then choose (A),(B),(C),(D) or (E). Therefore, the correct answer is (A).
# Context 1:
Q. A green lizard can travel from the green cave to the blue cave in 72 minutes; the blue lizard can travel from the blue cave to the green cave in 25% less time. If the green lizard started to travel 5.5 minutes before the blue lizard, how many minutes after the blue lizard, will the green lizard pass the middle line? A)8. B) 7.5. C)7. D)6.5. E)6.
A. Read through the multiple choice math word problem and make sure you understand it. Then try to solve the problem, choosing (A),(B),(C),(D) or (E). | 2309.16797#121 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 122 | Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020.
Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017.
Scott E. Reed, Konrad Zolna, Emilio Parisotto, Sergio Gómez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards, Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar, and Nando de Freitas. A generalist agent. Trans. Mach. Learn. Res., 2022, 2022. URL https://openreview.net/forum?id=1ikK0kHjvj.
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 122 | ,(C),(D) or (E). A. The green lizard travels 72/2 = 36 minutes to the middle line. The blue lizard travels 72*0.75 = 54 minutes to the middle line. The blue lizard travels 54/36 = 1.5 times faster than the green lizard. The green lizard travels 5.5/1.5 = 3.7 minutes before the blue lizard passes the middle line. The green lizard passes the middle line Solve the multiple choice math word problem. Confirm you understand
all aspects of the problem. Then choose (A),(B),(C),(D) or (E). Therefore, the correct answer is (E).
# Context 2:
Q. There was a simple interest of Rs. 4016.25 on a principal amount at the rate of 9% p.a. in 5 years. Find the principal amount. A)Rs 7925 B)Rs 8925 C)Rs 7926 D)Rs 7925 E)None of these
A. Read through the multiple choice math word problem and make sure you understand it. Then try to solve the problem, choosing (A),(B)
,(C),(D) or (E). Principal = Rs. (100 x 4016.25)/(9 x 5) = Rs. 8925. The answer is Rs 8925. Solve the multiple choice math word problem. Confirm you understand | 2309.16797#122 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
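Context 2 above inverts the simple-interest formula SI = P * R * T / 100 to recover the principal, P = 100 * SI / (R * T). A quick check of the quoted numbers (illustrative only, not part of the dataset):

```python
si, rate, years = 4016.25, 9, 5
principal = 100 * si / (rate * years)
print(principal)  # 8925.0 -> Rs 8925, option (B)
```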
2309.16609 | 123 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. SocialIQA: Commonsense reasoning about social interactions. CoRR, abs/1904.09728, 2019. URL http://arxiv.org/abs/1904.09728.
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 124 | Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. BLOOM: A 176B-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Noam Shazeer. GLU variants improve transformer. arXiv preprint arXiv:2002.05202, 2020. | 2309.16609#124 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 124 | Prompt 0: Solve the math word problem, giving your answer as an arabic numeral. Prompt 1: Have you solved a problem like this before? Context 0: Q. Frank had 11 dollars. At the store he spent $3 on a new game. If he got another 14 dollars for his allowance, how much money does he have now? A. Solve the math word problem, giving your answer as an arabic numeral. Recall a similar problem you have successfully solved before. Frank had 11 dollars. He spent 3 dollars, so now he has 11 - 3 = 8 dollars. He got 14 dollars for his allowance, so now he has 8 + 14 = 22 dollars. The answer is 22. If you have seen a similar problem before, solve this problem in a similar manner. Therefore, the answer (arabic numerals) is Have you solved a problem like this before? Therefore, the answer (arabic numerals) is 22 Context 1: Q. Emily was playing a trivia game. In the first round she scored 16 points and in the second round she scored 33 points. In the last round she lost 48 points. How many points did she have at the end of the game? A. Solve the math word problem, giving your answer as an arabic | 2309.16797#124 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
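Context 0 above chains three money updates; the worked answer can be verified in one line (illustrative only, not part of the dataset):

```python
assert 11 - 3 + 14 == 22  # start, minus the $3 game, plus the $14 allowance
```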
2309.16609 | 125 | Noam Shazeer. GLU variants improve transformer. arXiv preprint arXiv:2002.05202, 2020.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. HuggingGPT: Solving AI tasks with ChatGPT and its friends in HuggingFace. arXiv preprint arXiv:2303.17580, 2023.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019.
Qingyi Si, Tong Wang, Naibin Gu, Rui Liu, and Zheng Lin. Alpaca-CoT: An instruction-tuning platform with unified interface of instruction collection, parameter-efficient methods, and large language models, 2023. URL https://github.com/PhoebusSi/alpaca-CoT.
Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492, 2023. | 2309.16609#125 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 125 | In the last round she lost 48 points. How many points did she have at the end of the game? A. Solve the math word problem, giving your answer as an arabic numeral. Recall a similar problem you have successfully solved before. In the first round Emily scored 16 points and in the second round she scored 33 points. This means she has 16 + 33 = 49 points. In the last round she lost 48 points, so now she has 49 - 48 = 1 point. The answer is 1. If you have seen a similar problem before, solve this problem in a similar manner. Therefore, the answer (arabic numerals) Have you solved a similar problem before? Therefore, the answer (arabic numerals) is 1 Context 2: Q. Maria bought 2 new chairs and 2 new tables for her house. If she spent 8 minutes on each piece of furniture putting it together, how many minutes did it take her to finish? A. Solve the math word problem, giving your answer as an arabic numeral. Recall a similar problem you have successfully solved before. Maria bought 2 chairs and 2 tables. She spent 8 minutes on each piece of furniture. This means she spent 2 * 8 = 16 minutes on the chairs and 2 * 8 = 16 minutes on the | 2309.16797#125 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
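The two contexts above reduce to short arithmetic chains. A quick check (illustrative only; the chunk is truncated before Maria's total, so the 32 below is the computed value, not a quoted one):

```python
assert 16 + 33 - 48 == 1   # Emily's points after three rounds

minutes = (2 + 2) * 8      # 2 chairs + 2 tables, 8 minutes each
print(minutes)             # 32
```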
2309.16609 | 126 | Stability AI. StableBeluga2, 2023. URL https://huggingface.co/stabilityai/StableBeluga2.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021, 2020.
Jianlin Su. Improving transformer: Length extrapolation ability and position robustness, 2023a. URL https://spaces.ac.cn/archives/9444.
Jianlin Su. The magical effect of the Bias term: RoPE + Bias = better length extrapolation, 2023b. URL https://spaces.ac.cn/archives/9577.
Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864, 2021. | 2309.16609#126 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
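Several entries in the chunk above (Su et al., 2021; Su, 2023a;b) concern rotary position embedding and its length extrapolation. For reference, here is a minimal NumPy sketch of the RoPE rotation in its rotate-half form; the base of 10000 is the common default, and the exact feature layout inside any particular model is an assumption.

```python
import numpy as np

def rope(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotate query/key features x of shape (seq, dim) by position-dependent angles."""
    seq, dim = x.shape
    half = dim // 2
    inv_freq = base ** (-np.arange(half) / half)      # theta_i = base^(-2i/dim)
    angles = positions[:, None] * inv_freq[None, :]   # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Each pair (x1_i, x2_i) undergoes a 2-D rotation by its position's angle.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q_rotated = rope(np.random.randn(8, 64), np.arange(8))
```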
2309.16609 | 127 | Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan Shao, Qiong Tang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, and Xipeng Qiu. MOSS: Training conversational language models from synthetic data, 2023a.
Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023b.
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. | 2309.16609#127 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
Marc Szafraniec, Baptiste Rozière, Hugh Leather, Patrick Labatut, François Charton, and Gabriel Synnaeve. Code translation with compiler representations. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=XomEU3eNeSQ.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 4149–4158. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1421. URL https://doi.org/10.18653/v1/n19-1421.
| 2309.16609#128 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 128 | Prompt 1: 1, 2, 3, 4 Context 0: Q. Janet has 60 less than four times as many siblings as Masud. Carlos has 3/4 times as many siblings as Masud. If Masud has 60 siblings, how many more siblings does Janet have more than Carlos? A. Solve the math word problem without using a calculator, giving your answer as an arabic numeral. Four times as many siblings as Masud is 4*60 = 240 siblings. Janet has 240-60 = 180 siblings. Carlos has 3/4*60 = 45 siblings. Janet has 180-45 = 135 more siblings than Carlos. So the answer is 135 Solve the math word problem. Therefore, the answer (arabic numerals) is 135 Context 1: Q. Zayne sells bracelets for $5 each and two for $8. If he started with 30 bracelets and made $60 from selling bracelets for $5 each, how much in total did he make from selling his bracelets? A. Solve the math word problem without using a calculator, giving your answer as an arabic numeral. Zayne sold 60/5 = 12 bracelets for $5 each. He had 30-12 = 18 bracelets left. He sold 18/2 = 9 sets of two bracelets. He made | 2309.16797#128 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
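The chain-of-thought arithmetic quoted in this chunk can be verified mechanically; the following plain-Python assertions retrace each step of the three word problems (Janet's siblings, Zayne's bracelets, Josh's cookies).

```python
# Janet vs. Carlos, given that Masud has 60 siblings.
masud = 60
janet = 4 * masud - 60          # 240 - 60 = 180
carlos = 3 * masud // 4         # 45
assert janet - carlos == 135

# Zayne: $60 from singles at $5 each, the remaining bracelets sold two-for-$8.
singles = 60 // 5               # 12 bracelets
pairs = (30 - singles) // 2     # 9 pairs
assert 60 + pairs * 8 == 132

# Josh: 12 bracelets at $1.50 each, $1 of supplies each, $3 left after the cookies.
assert 12 * 1.5 - 12 * 1 - 3 == 3
```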
2309.16609 | 129 |
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford Alpaca: An instruction-following LLaMA model, 2023. URL https://github.com/tatsu-lab/stanford_alpaca.
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science, 2022. | 2309.16609#129 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 129 | sold 60/5 = 12 bracelets for $5 each. He had 30-12 = 18 bracelets left. He sold 18/2 = 9 sets of two bracelets. He made 9*8 = $72 from selling two bracelets for $8. In total, he made 60+72 = $132. So the answer is 132 Solve the math word problem. Therefore, the answer (arabic numerals) is 132 Context 2: Q. Josh is saving up for a box of cookies. To raise the money, he is going to make bracelets and sell them. It costs $1 for supplies for each bracelet and he sells each one for $1.5. If he makes 12 bracelets and after buying the cookies still has $3, how much did the box of cookies cost? A. I would solve the math word problem without using a calculator, giving my answer as an arabic numeral. Josh made 12*1.5 = $18 from selling bracelets. He spent 12*1 = $12 on supplies. He had 18-12 = $6 before buying the cookies. The cookies cost 6-3 = $3. So the answer is 3 1, 2, 3, 4 Therefore, the answer (arabic numerals) is 3 | 2309.16797#129 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16797 | 130 | # J.9 SINGLEEQ
Individual after 2010 mutations.
Prompt 0: "Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely
Prompt 1: ANALYZE THE FOLLOWING FOUR QUANTITIES. DO QUANTITIES 3 AND 1 TOGETHER VARY INVERSELY AS QUANTITIES 2 AND 4 DO?
Context 0: Q. White t-shirts can be purchased in packages of 6. If Mom buys 71
packages, how many white t-shirts will she have?
A. The solution is a word that exists in [some] other language.
A.
Each package has 6 t-shirts. Mom buys 71 packages. This means she will have 71 * 6 = 426 t-shirts. The answer is 426. 25. Given the system of linear equations x + y + z = 6, x + 2y + 3z = 10, x + 3y + 8z = 17,
22. A snail moves at a rate of 6 cm/s along the edge of a circular garden whose perimeter is 374 meters, while another snail Therefore, the answer (arabic numerals) is 426.
Context 1: Q. Maria has 3 boxes of eggs. Each box holds 7 eggs and there are 8
boxes in a case. How many eggs does Maria have? | 2309.16797#130 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16797 | 131 | Context 1: Q. Maria has 3 boxes of eggs. Each box holds 7 eggs and there are 8
boxes in a case. How many eggs does Maria have?
A. "Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely on the authority of your teachers and elders. Do not believe in traditions because they have been handed down for many generations. But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it."
A.
Each box holds 7 eggs. Maria has 3 boxes. This means she has 3 * 7 = 21 eggs. The answer is 21 ANALYZE THE FOLLOWING FOUR QUANTITIES. DO QUANTITIES 3 AND 1 TOGETHER VARY INVERSELY AS QUANTITIES 2 AND 4 DO? Therefore, the answer (arabic numerals) is 21.
Context 2: Q. At Mrs. Hilt's house, there was 29 inches of snow, and Brecknock Elementary School received 17 inches of snow. How much more snow | 2309.16797#131 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16797 | 132 | Context 2: Q. At Mrs. Hilt's house, there was 29 inches of snow, and Brecknock Elementary School received 17 inches of snow. How much more snow did Mrs. Hilt's house have?
A. "Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely on the authority of your teachers and elders. Do not believe in traditions because they have been handed down for many generations. But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it."
A. | 2309.16797#132 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 134 | Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023b. doi: 10.48550/arXiv.2307.09288. URL https://doi.org/10.48550/arXiv.2307.09288. | 2309.16609#134 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 134 | Prompt 0: 08-02-2013 - 09-02-2013 Prompt 1: Tell me about the history of [PROMPT1]; Discuss the impact of [PROMPT1]; Give me the current status of [PROMPT1]; Tell me about the history of [PROMPT2]; Discuss the impact of [PROMPT2 Context 0: Q. Frank was reading through his favorite book. The book had 612 pages equally distributed over 24 chapters. It took Frank 6 days to finish the book. How many pages did he read per day? A. 312 = 65 + 247. A. The book had 612 pages. It was equally distributed over 24 chapters . This means each chapter had 612 / 24 = 25.5 pages. Frank read the book in 6 days. This means he read 612 / 6 = 102 pages per day. So the answer is 102 Discuss the history, impact, and current status of [PROMPT1]; Discuss the history, impact, and current status of [PROMPT2]; Therefore, the answer (arabic numerals) is 102 D Context 1: Q. Jack received 3 emails in the morning, 4 emails in the afternoon and 8 emails in the evening. How many emails did Jack receive in the morning and evening? A. 08-02-2013 - 09-02-2013 A. Jack | 2309.16797#134 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
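As with the earlier transcripts, the arithmetic quoted in this chunk checks out despite the noisy surrounding text:

```python
assert 612 / 24 == 25.5   # pages per chapter in Frank's book
assert 612 / 6 == 102     # pages Frank read per day
assert 3 + 8 == 11        # Jack's morning plus evening emails
assert 37 - 36 == 1       # Paco's remaining cookies
```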
2309.16609 | 135 | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023a.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Huai hsin Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. ArXiv, abs/2203.11171, 2022.
In Conference on Empirical Methods in Natural Language Processing, 2017. URL https://api.semanticscholar.org/CorpusID:910689. | 2309.16609#135 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
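The Wang et al. (2022) entry cited in the chunk above, self-consistency decoding, reduces to sampling several chain-of-thought completions and majority-voting on their final answers. A minimal sketch, assuming the same hypothetical `llm(prompt) -> str` call as earlier; the naive answer extractor is a placeholder, not the paper's method.

```python
from collections import Counter

def extract_answer(completion: str) -> str:
    # Placeholder: treat the last whitespace-separated token as the final answer.
    return completion.strip().split()[-1]

def self_consistency(llm, prompt: str, n: int = 10) -> str:
    """Sample n reasoning paths and return the most common final answer."""
    answers = [extract_answer(llm(prompt)) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```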
2309.16797 | 135 | and 8 emails in the evening. How many emails did Jack receive in the morning and evening? A. 08-02-2013 - 09-02-2013 A. Jack received 3 emails in the morning and 8 emails in the evening. This means he received 3 + 8 = 11 emails in the morning and evening . So the answer is 11 Discuss the history, impact, and current status of [PROMPT1]; Therefore, the answer (arabic numerals) is 11 Discus Discuss the history, impact, and current status of [PROMPT1 Tell me about the history of [PROMPT1]; Discuss the impact of [ PROMPT1]; Give me the current status of [PROMPT1]; Tell me about the history of [PROMPT2]; Discuss the impact of [PROMPT2 Therefore, the answer (arabic numerals) is 11 Discus Context 2: Q. Paco ate 36 cookies. If he had 37 cookies initially How many cookies did Paco have left? A. 154 = 72 + 82. A. Paco ate 36 cookies. He had 37 cookies initially. This means he has 37 - 36 = 1 cookie left. | 2309.16797#135 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 136 | In Conference on Empirical Methods in Natural Language Processing, 2017. URL https://api.semanticscholar.org/CorpusID:910689.
Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? Exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023b.
| 2309.16609#136 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 136 | So the answer is 1 Discuss the history, impact, and current status of [PROMPT1]; Therefore, the answer (arabic numerals) is 1 Discus Discuss the history, impact, and current status of [PROMPT1]; Therefore, the answer (arabic numerals) is 1 Discus
K APE INSTRUCTION INDUCTION TASKS
To demonstrate Promptbreeder's ability to evolve few-shot contexts as well as task-prompts, we ran few-shot Promptbreeder on all 24 Instruction Induction datasets used in the APE experiments. Unlike text-davinci-002, our LLM is not instruction-tuned, and yet Promptbreeder was able to match or surpass the APE results on 21 out of 24 tasks, by margins of up to 21%.
Three APE controls are provided; see Table 9. The first two are from previously published results using the text-davinci-002 model. The third modifies our Promptbreeder to use APE's task-prompt initialisation method and then the mutation-prompt from the APE paper, "Generate a variation of the following instruction while keeping the semantic meaning". | 2309.16797#136 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
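The third APE control described in the chunk above is simple to picture in code. A sketch, reusing the hypothetical `llm(prompt) -> str` call from earlier; the exact template wrapped around the instruction is an assumption.

```python
APE_MUTATION_PROMPT = ("Generate a variation of the following instruction "
                       "while keeping the semantic meaning")

def ape_control_mutate(llm, instruction: str, n: int = 4) -> list[str]:
    """Resample n semantically equivalent task-prompt variants, APE-style."""
    return [llm(f"{APE_MUTATION_PROMPT}.\n\nInstruction: {instruction}\n\nVariation:")
            for _ in range(n)]
```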
2309.16609 | 137 |
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning language models with self-generated instructions. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 13484–13508. Association for Computational Linguistics, 2023c. doi: 10.18653/v1/2023.acl-long.754. URL https://doi.org/10.18653/v1/2023.acl-long.754.
Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv preprint arXiv:2109.00859, 2021. | 2309.16609#137 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 137 | For the Instruction Induction datasets we do not start with a problem description, so for task-prompt initialisation APE uses induction input examples for each task from the dataset. Induction inputs are a fixed prompt together with a handful of training examples, used to infer possible problem descriptions. To compare Promptbreeder to APE, we therefore initialized the task description with a randomly chosen induction input example for each task. The example below is an induction input sample for the "Larger Animal" task.
I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. Here are the input-output pairs:
Input: cougar, flea Output: cougar
Input: whale shark, dog Output: whale shark
Input: human, bald eagle Output: human
Input: flea, great white shark Output: great white shark
Input: coyote, tiger Output: tiger
The instruction was
| 2309.16797#137 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 138 | Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi D. Q. Bui, Junnan Li, and Steven C. H. Hoi. CodeT5+: Open code large language models for code understanding and generation. CoRR, abs/2305.07922, 2023d. doi: 10.48550/arXiv.2305.07922. URL https://doi.org/10.48550/arXiv.2305.07922.
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022a. URL https://openreview.net/forum?id=gEZrGCozdqR. | 2309.16609#138 |
2309.16797 | 138 |
Dataset                      Zero-shot APE  Few-shot APE  PE using APE prompts  Few-shot PE
First Letter                 100            100           1                     100
Second Letter                87             69            27                    95
List Letters                 99             100           0                     99
Starting With                68             69            6                     71
Pluralization                100            100           23                    100
Passivization                100            100           100                   100
Negation                     83             90            16                    90
Antonyms                     83             86            80                    87
Synonyms                     22             14            16                    43
Membership                   66             79            96                    100
Rhymes                       100            61            90                    100
Larger Animal                97             97            27                    97
Cause Selection              84             100           66                    100
Common Concept               27             32            0                     0
Formality                    65             70            10                    7
Sum                          100            100           72                    100
Difference                   100            100           98                    100
Number to Word               100            100           66                    100
Translation English-German   82             86            46                    87
Translation English-Spanish  86             91            80                    91
Translation English-French   78             90            68                    91
Sentiment Analysis           94             93            33                    93
Sentence Similarity          36             43            53                    56
Word in Context              62             63            6                     65
| 2309.16797#138 |
2309.16609 | 139 | Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Huai hsin Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022, 2022b. URL https://api.semanticscholar.org/CorpusID:249674500.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022c.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. HuggingFace's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019. | 2309.16609#139 |
2309.16797 | 139 |
Table 9: Prompt Evolution (PE) using the PaLM2-L LLM surpasses APE on 21 out of 24 instruction induction tasks. Three APE controls are provided. The first two are from previously published results using the text-davinci-002 model. The third modifies our PromptBreeder to use APE's task-prompt initialisation method and then the mutation-prompt from the APE paper "Generate a variation of the following instruction while keeping the semantic meaning".
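The 21-of-24 figure can be re-derived from the table above. A minimal sketch, assuming "surpasses" counts tasks where few-shot PE matches or exceeds few-shot APE (scores transcribed from the table; the dictionary literal and names are illustrative):

```python
# Re-check the "21 out of 24" claim: per task, compare few-shot APE (first
# value) against few-shot PE (second value), scores taken from the table.
few_shot_ape_vs_pe = {
    "First Letter": (100, 100), "Second Letter": (69, 95), "List Letters": (100, 99),
    "Starting With": (69, 71), "Pluralization": (100, 100), "Passivization": (100, 100),
    "Negation": (90, 90), "Antonyms": (86, 87), "Synonyms": (14, 43),
    "Membership": (79, 100), "Rhymes": (61, 100), "Larger Animal": (97, 97),
    "Cause Selection": (100, 100), "Common Concept": (32, 0), "Formality": (70, 7),
    "Sum": (100, 100), "Difference": (100, 100), "Number to Word": (100, 100),
    "Translation English-German": (86, 87), "Translation English-Spanish": (91, 91),
    "Translation English-French": (90, 91), "Sentiment Analysis": (93, 93),
    "Sentence Similarity": (43, 56), "Word in Context": (63, 65),
}
wins = sum(pe >= ape for ape, pe in few_shot_ape_vs_pe.values())
print(f"PE matches or exceeds APE on {wins} of {len(few_shot_ape_vs_pe)} tasks")  # 21 of 24
```

Under this reading, the three tasks where PE falls short are List Letters, Common Concept, and Formality.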
K.1 BEST PROMPTS AND CONTEXTS
Here are the best few-shot results (evolved prompts and contexts) for the 24 instruction induction tasks from the APE paper.
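At evaluation time, an evolved task-prompt is concatenated with its evolved few-shot contexts and the new question. A hedged sketch of that assembly, mirroring the Q./A. layout of the listings that follow (the function name and exact spacing are assumptions):

```python
# Hedged sketch: join an evolved task-prompt, its evolved few-shot contexts,
# and a fresh question into one query string; the model continues after "A.".
def assemble_query(task_prompt: str, contexts: list[str], question: str) -> str:
    parts = [task_prompt]
    parts.extend(contexts)            # each context is a full worked Q./A. block
    parts.append(f"Q. {question}")    # the new instance to answer
    parts.append("A.")                # the completion starts here
    return "\n".join(parts)

query = assemble_query(
    "Write out the first letter of each input.",
    ["Q. drummer\nA. Write out the first letter of each input. "
     "Therefore, the correct answer is (d)."],
    "cougar",
)
print(query)
```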
K.1.1 FIRST LETTER | 2309.16797#139 |
2309.16609 | 140 | Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, and Zhendong Mao. ExpertPrompting: Instructing large language models to be distinguished experts. arXiv preprint arXiv:2305.14688, 2023a.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. WizardLM: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023b.
Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196, 2023c.
Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, and Yang Liu. Exploring large language models for communication games: An empirical study on werewolf. arXiv preprint arXiv:2309.04658, 2023d. | 2309.16609#140 |
2309.16797 | 140 |
Prompt 0: A List of Responses in descending order of score. is the best response. It resembles (12) more than it does (1)
Prompt 1: 9.5: LM'
Contexts
Context 0: Q. placing A. A List of Responses in descending order of score. (13) is the best response. It resembles (12) more than it does (1) or (2). Q. 123456789 A. A List of Responses in descending order of score. (13) is the best response. It resembles (12) more than it does 9.5: LM' Therefore, the correct answer is (placing, 1
Context 1: Q. drummer A. Write out the first letter of each input. Q. 123456789 A. Write out the first letter of each input. Q. 123456789 A. Write out the first letter of each The instruction was + INSTRUCTION MUTANT Therefore, the correct answer is (drummer, 1
Context 2: Q. rest A. Write out the first letter of each input. Q. 123456789 A. Write out the first letter of each input. Q. 123456789 A. Write out the first letter of each The instruction was + INSTRUCTION MUTANT Therefore, the correct answer is (r) 12 (13)
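Each context above terminates in the fixed pattern "Therefore, the correct answer is (...)", which is how an answer can be read off a completion. A hedged sketch of that extraction (the regex and fallback behaviour are assumptions):

```python
# Hedged sketch: pull the final answer out of a completion that follows the
# "Therefore, the correct answer is (...)" convention seen above.
import re

def extract_answer(completion: str) -> str | None:
    matches = re.findall(r"Therefore, the correct answer is \(([^)]*)\)", completion)
    return matches[-1].strip() if matches else None  # take the last occurrence

print(extract_answer("The second letter is e. Therefore, the correct answer is (e)."))  # e
```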
| 2309.16797#140 |
2309.16609 | 141 | Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, Juntao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, and Zhiying Wu. Baichuan 2: Open large-scale language models. Technical report, Baichuan Inc., 2023. URL https://cdn.baichuan-ai.com/paper/Baichuan2-technical-report.pdf.
| 2309.16609#141 |
2309.16609 | 142 |
Jianxin Yang. Firefly. https://github.com/yangjianxin1/Firefly, 2023.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mPLUG-Owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models, 2023.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language models, 2023a. | 2309.16609#142 |
2309.16797 | 142 |
K.1.2 SECOND LETTER
Prompt 0: '0therwise, give the sec0nd letter 0f the input. The instruction was: give the sec0nd letter 0f the input. Give the sec0nd letter 0f the input. 0therwise,
Prompt 1: 1956
Contexts
Context 0: Q. month A. '0therwise, give the sec0nd letter 0f the input. The instruction was: Give the sec0nd letter 0f the input. 0therwise, give the sec0nd letter 0f the input. A. The input is 5 letters long, so the second letter is given. A. The second letter is o. A. So the answer is o. The instruction was: Write the last vowel in the input I 'e' Therefore, the correct answer is (o).
Context 1: Q. asylum A. '0therwise, give the sec0nd letter 0f the input. The instruction was: Give the sec0nd letter 0f the input. 0therwise, give the sec0nd letter 0f the input. A. The input is 6 letters long, so the second letter is given. A. The second letter is s. A. So the answer is s. The instruction was: Write the last vowel in the input I 1: Therefore, the correct answer is (s). | 2309.16797#142 |
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 143 | Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. How well do large language models perform in arithmetic tasks? arXiv preprint arXiv:2304.02015, 2023b.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. RRHF: Rank responses to align language models with human feedback without tears, 2023c.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MAmmoTH: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653, 2023. | 2309.16609#143 |
2309.16797 | 143 |
Context 2: Q. heel A. '0therwise, give the sec0nd letter 0f the input. The instruction was: Give the sec0nd letter 0f the input. 0therwise, give the sec0nd letter 0f the input. A. The input is 4 letters long, so the second letter is given. A. The second letter is e. A. So the answer is e. The instruction was: Write the last vowel in the input I 'e' Therefore, the correct answer is (e). | 2309.16797#143 |
2309.16609 | 144 | Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Anna Korhonen, David R. Traum, and Lluís Màrquez (eds.), Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pp. 4791–4800. Association for Computational Linguistics, 2019. doi: 10.18653/v1/p19-1472. URL https://doi.org/10.18653/v1/p19-1472.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. GLM-130B: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414, 2022. | 2309.16609#144 |
2309.16797 | 144 |
K.1.3 LIST LETTERS
Prompt 0: ?
Prompt 1: ?
Contexts
Context 0: Q. gale A. ? Q. gale A. ? Q. gale A. ? Q. gale A. ? Q. gale A. ? Q. gale A. ? Insert a space after each consonant in the input. Therefore, the correct answer is (a) g a l e.
Context 1: Q. accompaniment A. ? Q. ? A. ? Q. ? A. ? Q. ? A. ? Q. ? A. ? Q. ? Insert a space after each consonant in the input. Therefore, the correct answer is (a) a c c o m p a n i m e n t.
Context 2: Q. credibility A. ? Q. A. Q. A. Q. A. Q. A. Q. A. Q. Insert a space after each consonant in the input. Therefore, the correct answer is (c)r(e)d(i)b(i)l(i)t(y).
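For reference, the intended behaviours behind these letter tasks (K.1.1 through K.1.3) are one-line string operations. A hedged sketch of plausible ground-truth functions, with names assumed and expected outputs read off the worked contexts:

```python
# Hedged reference implementations for the letter-manipulation tasks; the
# function names are assumptions, and the asserted outputs match the worked
# contexts (first letter of "drummer" is d, second letter of "heel" is e,
# "gale" listed letter by letter is "g a l e").
def first_letter(word: str) -> str:
    return word[0]

def second_letter(word: str) -> str:
    return word[1]

def list_letters(word: str) -> str:
    return " ".join(word)

assert first_letter("drummer") == "d"
assert second_letter("heel") == "e"
assert list_letters("gale") == "g a l e"
```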
K.1.4 STARTING WITH | 2309.16797#144 |
2309.16609 | 145 | Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. RepoCoder: Repository-level code completion through iterative retrieval and generation. CoRR, abs/2303.12570, 2023a. doi: 10.48550/arXiv.2303.12570. URL https://doi.org/10.48550/arXiv.2303.12570.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. Evaluating the performance of large language models on GAOKAO benchmark. CoRR, abs/2305.12474, 2023b. doi: 10.48550/arXiv.2305.12474. URL https://doi.org/10.48550/arXiv.2305.12474. | 2309.16609#145 |
2309.16797 | 145 |
Prompt 0:
Prompt 1: Output: INSTRUCTION MUST BE 1 SENTENCE LONG The first word containing the letter. # OUTPUT
Contexts
Context 0: Q. The exhibition really impressed the critics. [i] A. <50 lines of carriage-returns> Output: The first word containing the letter. # OUTPUT INSTRUCTION MUST BE 1 SENTENCE LONG Therefore, the correct answer is (impressed).
Context 1: Q. My car was stolen. [m] A. Find the word after the first [X] character in the input. <50 lines of carriage-returns> Output: The first word containing the letter. # OUTPUT INSTRUCTION MUST BE 1 SENTENCE LONG Therefore, the correct answer is (My).
Context 2: Q. Jim was chopping logs when Margaret left and was still at it when she got back. [b] A. Find the word after the first [X] character in the input. <50 lines of carriage-returns> Output: The first word containing the letter. # OUTPUT INSTRUCTION MUST BE 1 SENTENCE LONG Therefore, the correct answer is (back).
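Judging from the three contexts above, the underlying task is to return the word of the sentence that starts with the bracketed letter. A hedged sketch of that ground truth (the function name and punctuation handling are assumptions consistent with the contexts):

```python
# Hedged sketch of the 'starting with' ground truth: return the first word
# of the sentence that begins with the bracketed letter.
def first_word_starting_with(sentence: str, letter: str) -> str | None:
    for word in sentence.split():
        stripped = word.strip(".,!?")
        if stripped.lower().startswith(letter.lower()):
            return stripped
    return None  # no word starts with the letter

assert first_word_starting_with("The exhibition really impressed the critics.", "i") == "impressed"
assert first_word_starting_with("My car was stolen.", "m") == "My"
```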
K.1.5 PLURALIZATION | 2309.16797#145 |
2309.16609 | 146 | Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. CodeGeeX: A pre-trained model for code generation with multilingual evaluations on humaneval-x. CoRR, abs/2303.17568, 2023. doi: 10.48550/arXiv.2303.17568. URL https://doi.org/10.48550/arXiv.2303.17568.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. AGIEval: A human-centric benchmark for evaluating foundation models. CoRR, abs/2304.06364, 2023a. doi: 10.48550/arXiv.2304.06364. URL https://doi.org/10.48550/arXiv.2304.06364.
Wanjun Zhong, Lianghong Guo, Qiqi Gao, and Yanlin Wang. MemoryBank: Enhancing large language models with long-term memory. arXiv preprint arXiv:2305.10250, 2023b. | 2309.16609#146 |
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 146 | K.1.5 PLURALIZATION
Prompt 0: Write the plural form of the input.
Prompt 1: If the input ends in y, remove y and add ies. Otherwise, add s to the end of the input.
Contexts
Context 0: Q. touch A. Write the plural form of the input. Q. touch A. Write the plural form of the input. Q. touch A. Write the plural form of the input. Q. touch A. Write the plural form of the input. If the input ends in y, remove y and add ies. Otherwise, add s to the end of the input. Therefore, the correct answer is (touches).
Context 1: Q. forage A. Write the plural form of the input. Q. forage A. Write the plural form of the input. Q. forage A. Write the plural form of the input. Q. forage A. Write the plural form of the input. If the input ends in y, remove y and add ies. Otherwise, add s to the end of the input. Therefore, the correct answer is (forages).
Context 2: Q. mile A. Write the plural form of the input. Q. mile A. Write the plural form of the input. Q. mile A. Write the plural form of the input. Q. mile A. Write the plural form of the input. | 2309.16797#146 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
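The evolved Prompt 1 in K.1.5 above is an explicit rewrite rule, so it can be expressed directly as code. A minimal Python sketch of that rule exactly as stated; the function is illustrative only:

```python
def pluralize(word: str) -> str:
    """The evolved K.1.5 rule verbatim: if the input ends in y,
    remove y and add ies; otherwise add s."""
    if word.endswith("y"):
        return word[:-1] + "ies"
    return word + "s"

assert pluralize("forage") == "forages"
assert pluralize("mile") == "miles"
# Note the rule is imperfect: it yields "touchs" for "touch",
# while the model's answer in Context 0 above was "touches".
```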
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Huai-hsin Chi. Least-to-most prompting enables complex reasoning in large language models. ArXiv, abs/2205.10625, 2022.
A APPENDIX
A.1 MORE TRAINING DETAILS
A.1.1 DATA FORMAT FOR QWEN-CHAT
Different from conventional pretraining based on autoregressive next-token prediction, SFT and RLHF, despite using a similar training task, require a specially designed data format to build a conversational AI assistant model. Common formats include the "human-assistant" and ChatML formats. To our knowledge, one of the earliest examples of the human-assistant format comes from Anthropic (Bai et al., 2022b), which adds the special phrase "\nhuman: " in front of the user input and "\nassistant: " in front of the assistant response. This makes it easy for the base language model to transfer to the pattern of conversational AI. However, as these specific phrases are common words, it might be hard for the model to disambiguate them from the same words in other contexts. | 2309.16609#147 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
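Before the ChatML alternative discussed in the next chunk, a minimal sketch of the human-assistant format from A.1.1 above, including the ambiguity it suffers from; the helper is hypothetical, and the prefix strings follow the quoted phrases:

```python
def to_human_assistant(turns):
    """Splice (user, assistant) turns into one string using the plain
    "human: " / "assistant: " prefixes described in A.1.1."""
    out = ""
    for user_msg, assistant_msg in turns:
        out += "\nhuman: " + user_msg + "\nassistant: " + assistant_msg
    return out

# The weakness noted above: the marker words are ordinary words, so a
# user message that itself contains "assistant:" is indistinguishable
# from a real role marker.
print(to_human_assistant([("My assistant: Alice is away today.", "Noted.")]))
```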
Instead, we turned to the ChatML format proposed by OpenAI. This format allows the use of special tokens, i.e., "<|im_start|>" and "<|im_end|>", that do not appear in pretraining, and thus resolves the aforementioned problem. We demonstrate an example of the format below.
# ChatML Format
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
Hello! How can I assist you today?<|im_end|>
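A minimal sketch of assembling the ChatML string shown above; the wrapper function is my own, while the token layout follows the example:

```python
def to_chatml(system: str, turns) -> str:
    """Wrap each message in <|im_start|>{role} ... <|im_end|>. Because
    the delimiters are dedicated special tokens that never occur in
    pretraining text, role markers cannot be confused with content."""
    lines = [f"<|im_start|>system\n{system}<|im_end|>"]
    for user_msg, assistant_msg in turns:
        lines.append(f"<|im_start|>user\n{user_msg}<|im_end|>")
        lines.append(f"<|im_start|>assistant\n{assistant_msg}<|im_end|>")
    return "\n".join(lines)

print(to_chatml("You are a helpful assistant.",
                [("Hello!", "Hello! How can I assist you today?")]))
```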
A.2 EVALUATION
A.2.1 AUTOMATIC EVALUATION | 2309.16609#148 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 148 | Prompt 0: Replace The $1 $2. with $3 was $4 by the $1.
Prompt 1: Swap the positions of the noun phrases and add the word "by" before the second noun phrase. Then, conjugate the verb and add "ed" to the end. If the verb is "to be", then conjugate the verb
Contexts
Context 0: Q. The authors stopped the presidents. A. Replace The $1 $2. with $3 was $4 by the $1. A. Replace The $1 $2. with $3 was $4 by the $1. A. Replace The $1 $2. with $3 was $4 by the $1. A. Replace The $1 $ Swap the positions of the noun phrases and add the word "by" before the second noun phrase. Then, conjugate the verb and add "ed" to the end. If the verb is "to be", then conjugate the verb Therefore, the correct answer is (The presidents were stopped by the authors.)
Context 1: Q. The tourists advised the professors. A. Replace The $1 | 2309.16797#148 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 149 | A.2 EVALUATION
A.2.1 AUTOMATIC EVALUATION
To provide a complete picture of the performance of our model series QWEN, in this section we report the detailed performance of our models as well as the baselines on the comprehensive benchmark evaluation proposed by OpenCompass Team (2023). We report the results in multiple tables based on the officially provided categories, including examination, language, knowledge, understanding, and reasoning. For the baseline models, we report the higher of their published results and their results on the leaderboard.
Examination Here we evaluate the models on a series of examination-related datasets. The datasets include:
• MMLU (Hendrycks et al., 2020) Massive Multi-task Language Understanding is designed for measuring language understanding capabilities. We report 5-shot results.
• C-Eval (Huang et al., 2023) C-Eval is a Chinese evaluation dataset spanning 52 diverse disciplines. We report 5-shot results.
• CMMLU (Li et al., 2023c) CMMLU is designed for assessing language understanding capabilities in Chinese. We report 5-shot results. | 2309.16609#149 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 149 | Q. The tourists advised the professors. A. Replace The $1 $2. with $3 was $4 by the $1. A. Replace The $1 $2. with $3 were $4 by the $1. A. Replace The $1 $2. with $3 was $4 by the $1. A. Replace The $1 $ Swap the positions of the noun phrases and add the word "by" before the second noun phrase. Then, conjugate the verb and add "ed" to the end. If the verb is "to be", then conjugate the verb Therefore, the correct answer is (The professors were advised by the tourists.)
Context 2: Q. The actors stopped the artists. A. Replace The $1 $2. with $3 was $4 by the $1. A. The artists were stopped by the actors. Q. The actors stopped the artists. A. Replace The $1 $2. with $3 was $4 by the $1. A. The artists were stopped by Swap the positions of the noun phrases and add the word "by" before the second noun phrase. Then, conjugate the verb and add "ed" to the | 2309.16797#149 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
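The passivization template in the two Promptbreeder chunks above ("Replace The $1 $2. with $3 was $4 by the $1.") is a pure string rewrite. A minimal regex sketch for the regular past-tense pattern seen in these contexts; the function and regex are mine:

```python
import re

def passivize(sentence: str) -> str:
    """'The authors stopped the presidents.' ->
    'The presidents were stopped by the authors.'
    Handles only plural nouns and regular -ed verbs, as in the contexts."""
    m = re.fullmatch(r"The (\w+) (\w+ed) the (\w+)\.", sentence)
    if not m:
        return sentence
    agent, verb, patient = m.groups()
    return f"The {patient} were {verb} by the {agent}."

assert passivize("The tourists advised the professors.") == \
    "The professors were advised by the tourists."
assert passivize("The actors stopped the artists.") == \
    "The artists were stopped by the actors."
```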
2309.16609 | 150 | • CMMLU (Li et al., 2023c) CMMLU is designed for assessing language understanding capabilities in Chinese. We report 5-shot results.
• AGIEval (Zhong et al., 2023a) This is a benchmark consisting of human-centric examinations, including college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. We report zero-shot results.
• Gaokao-Bench (Zhang et al., 2023b) This is a benchmark with Gaokao (Chinese college-entrance examination) questions. We report zero-shot results.
• ARC (Clark et al., 2018) ARC is a dataset consisting of grade-school level, multiple-choice science questions. It includes an easy set and a challenge set, which are referred to as ARC-e and ARC-c. We report zero-shot results. A sketch of how such k-shot prompts can be assembled follows below.
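Concretely, a k-shot evaluation prepends k solved exemplars to the test question (k = 0 gives the zero-shot setting). A minimal sketch of assembling such a multiple-choice prompt; the template is a common convention, not necessarily the exact one used for these benchmarks:

```python
def build_kshot_prompt(exemplars, question, choices):
    """exemplars: list of (question, choices, answer_letter) triples.
    Returns a prompt ending at 'Answer:' for the model to complete."""
    blocks = []
    for q, opts, ans in exemplars:
        lines = [f"Question: {q}"]
        lines += [f"{letter}. {text}" for letter, text in zip("ABCD", opts)]
        lines.append(f"Answer: {ans}")
        blocks.append("\n".join(lines))
    lines = [f"Question: {question}"]
    lines += [f"{letter}. {text}" for letter, text in zip("ABCD", choices)]
    lines.append("Answer:")
    blocks.append("\n".join(lines))
    return "\n\n".join(blocks)
```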
Table 13: Results on MMLU. All are tested with five-shot accuracy. We provide the reported results of the other models for comparison. | 2309.16609#150 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 151 | Model      Params  Average  STEM  Social Sciences  Humanities  Others
MPT        7B      26.8     25.3  27.1             26.7        28.2
MPT        30B     46.9     39.0  52.8             44.5        52.9
Falcon     7B      26.2     26.2  24.7             26.4        27.4
Falcon     40B     55.4     45.5  65.4             49.3        65.0
ChatGLM2   6B      47.9     41.2  54.4             43.7        54.5
ChatGLM2   12B     56.2     48.2  65.1             52.6        60.9
InternLM   7B      51.0     -     -                -           -
Baichuan2  7B      54.2     -     -                -           -
Baichuan2  13B     59.2     -     -                -           -
XVERSE     13B     55.1     44.5  64.4             50.5        62.9
LLaMA      7B      35.1     30.5  38.3             34.0        38.1
LLaMA      13B     46.9     35.8  53.8             45.0        53.3
LLaMA      33B     57.8     46.0  66.7             55.8        63.4
LLaMA      65B     63.4     51.7  72.9             61.8        67.4
LLAMA 2    7B      45.3     36.4  51.2             42.9
LLAMA 2    13B     54.8     44.1  62.6             52.8
LLAMA 2    34B     62.6     52.1  71.8
LLAMA 2    70B     68.9     58.0  80.3 | 2309.16609#151 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |