doi stringlengths 10–10 | chunk-id int64 0–936 | chunk stringlengths 401–2.02k | id stringlengths 12–14 | title stringlengths 8–162 | summary stringlengths 228–1.92k | source stringlengths 31–31 | authors stringlengths 7–6.97k | categories stringlengths 5–107 | comment stringlengths 4–398 ⌀ | journal_ref stringlengths 8–194 ⌀ | primary_category stringlengths 5–17 | published stringlengths 8–8 | updated stringlengths 8–8 | references list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
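Rows of this schema can be consumed programmatically. A minimal, hypothetical sketch (the sample rows and their contents are invented, and only the `doi`, `chunk-id`, and `chunk` columns are used) of reassembling a paper's text from its chunks in `chunk-id` order:

```python
# Hypothetical sketch: reassemble each paper's text from rows of a chunked
# dataset with the columns described above. The sample rows are invented.
from collections import defaultdict

rows = [
    {"doi": "2309.16797", "chunk-id": 1, "chunk": "second part"},
    {"doi": "2309.16797", "chunk-id": 0, "chunk": "first part"},
    {"doi": "2309.16609", "chunk-id": 0, "chunk": "another paper"},
]

def reassemble(rows):
    """Group chunks by paper DOI and join them in chunk-id order."""
    by_doi = defaultdict(list)
    for row in rows:
        by_doi[row["doi"]].append(row)
    return {
        doi: " ".join(r["chunk"] for r in sorted(chunks, key=lambda r: r["chunk-id"]))
        for doi, chunks in by_doi.items()
    }

print(reassemble(rows)["2309.16797"])  # first part second part
```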
2309.16797 | 151 |
K.1.7 NEGATION
Prompt 0: False Prompt 1: M Contexts Context 0: Q. The original language of The Purple Taxi is French. A. Replace a noun or noun phrase with another noun or noun phrase. B. Replace a noun or noun phrase with a pronoun. C. Replace a pronoun with a noun or noun phrase. D. Replace a pronoun with another pronoun. Q. The original language of The Purple Taxi is French. Find the negation of a sentence. Therefore, the correct answer is (The original language of The Purple Taxi is not French.). Context 1: Q. Marcel Alessandri died in Paris. A. Nounphrase B. Pronoun C. Noun phrase D. Pronoun Q. Marcel Alessandri died in Paris. Find the negation of a sentence. Therefore, the correct answer is (Marcel Alessandri did not die in Paris False Therefore, the correct answer is (Marcel Alessandri did not die in Paris.). Context 2: Q. Some people are wise. A. Replace a noun or noun phrase with another noun or noun phrase. B. Replace a noun or noun phrase with a pronoun. C. Replace a pronoun with a noun or noun phrase. D. Replace a pronoun with another pronoun. Q. Some people are wise. Find the negation of Find the negation of a sentence. Therefore, the correct answer is (Some people are not wise.). | 2309.16797#151 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
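The self-referential mechanism described in the abstract above (task-prompts mutated under mutation-prompts, with selection by fitness) can be sketched as a toy evolutionary loop. This is not the paper's implementation: the fitness function and the string-append "mutation" below are trivial stand-ins for what would be LLM calls and training-set evaluation in the real system.

```python
# Toy sketch of a Promptbreeder-style loop: a population of
# (task_prompt, mutation_prompt) pairs is mutated and selected by fitness.
# The mutations and fitness here are stand-ins, not the paper's method.

def fitness(task_prompt):
    # Stand-in for accuracy on a training set: this toy rewards longer prompts.
    return len(task_prompt)

def mutate(task_prompt, mutation_prompt):
    # Stand-in for "ask the LLM to rewrite task_prompt as instructed by
    # mutation_prompt": here we just append the instruction's last word.
    return task_prompt + " " + mutation_prompt.split()[-1]

population = [("Think step by step.", "Add the word carefully."),
              ("Solve the problem.", "Add the word precisely.")]

for _ in range(3):  # a few generations
    # Mutate every individual, then keep the fittest half (truncation selection).
    children = [(mutate(t, m), m) for t, m in population]
    population = sorted(population + children,
                        key=lambda tm: fitness(tm[0]), reverse=True)[:2]

best = max(population, key=lambda tm: fitness(tm[0]))[0]
```

In the real system the mutation-prompts themselves are also mutated, which is what makes the process self-referential.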
2309.16797 | 152 |
K.1.8 ANTONYMS
Prompt 0:
Prompt 1: It is good to be a leader but it is more important to first be a follower.
Contexts Context 0: Q. nonpartisan A. , if possible. Input: 1 Output: 1 Input: 2 Output: 2 Input: 3 Output: 3 Input: 4 Output: 4 Input: Write about your most treasured item Therefore, the correct answer is (The answer is partisan. Context 1: Q. undignified A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. Input 1: 1 Output 1: 1 Input 2: 2 Output 2: 2 Input 3: 3 Output 3: 3 Input 4 Write the word's antonym Therefore, the correct answer is (The answer is dignified. Context 2: Q. flattering A. reverse the + + PROMPT + PROMPT+ PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PROMPT + PRO Write the Write the word's antonym Therefore, the correct answer is (The answer is unflattering.
K.1.9 SYNONYMS | 2309.16797#152 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
] |
Prompt 0: Prompt 1: 2015 Contexts Context 0: Q. bus A. 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16 Therefore, the correct answer is (The answer is 10, 'bus', 'coach', 'motorcoach', 'motorbus', Context 1: Q. electric A. 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23: 24: 25: 26: 27: 28: Convert each word to a synonym according to WordNet. If there are multiple synonyms, use the first one. Therefore, the correct answer is (The answer is 10, 'electric', 'electrical', 'power', 'current', Context 2: Q. frightened A. 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23: 24: 25: 26: 27: 28: Therefore, the correct answer is (The answer is 10, 'frightened', | 2309.16797#153 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 154 |
Model | Params | Avg. | Avg. (Hard) | STEM | Social Sciences | Humanities | Others
---|---|---|---|---|---|---|---
*Proprietary models* | | | | | | |
GPT-3.5 | - | 54.4 | 41.4 | 52.9 | 61.8 | 50.9 | 53.6
GPT-4 | - | 68.7 | 54.9 | 67.1 | 77.6 | 64.5 | 67.8
InternLM | 123B | 68.8 | 50.0 | 63.5 | 81.4 | 72.7 | 63.0
ChatGLM2 | - | 71.1 | 50.0 | 64.4 | 81.6 | 73.7 | 71.3
*Open-source models* | | | | | | |
ChatGLM2 | 6B | 51.7 | 37.1 | 48.6 | 60.5 | 51.3 | 49.8
InternLM | 7B | 52.8 | 37.1 | 48.0 | 67.4 | 55.4 | 45.8
Baichuan2 | 7B | 54.0 | - | - | - | - | -
Baichuan2 | 13B | 58.1 | - | - | - | - | -
XVERSE | 13B | 54.7 | 33.5 | 45.6 | 66.2 | 58.3 | 56.9
QWEN | 1.8B | 54.7 | 41.8 | 50.8 | 69.9 | 56.3 | 46.2
QWEN | 7B | 63.5 | 46.4 | 57.7 | 78.1 | 66.6 | 57.8
QWEN | 14B | 72.1 | 53.7 | 65.7 | 85.4 | 75.3 | 68.4
| 2309.16609#154 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 155 | In terms of MMLU, we report the detailed results in Table 13. In terms of C-Eval, we report the results in Table 14. For the rest of the datasets, we report the results in Table 15. Note that AGIEval includes
[5] https://github.com/openai/openai-python/blob/main/chatml.md
Table 15: Results on the other datasets of examination. Specifically, we report the results on CMMLU, AGIEval, ARC-e, and ARC-c. | 2309.16609#155 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 156 |
Model | Params | CMMLU | AGIEval | Gaokao-Bench | ARC-e | ARC-c
---|---|---|---|---|---|---
MPT | 7B | 25.9 | 21.3 | 19.8 | 70.2 | 42.6
Falcon | 7B | - | - | - | 70.0 | 42.4
ChatGLM2 | 6B | 49.3 | 39.0 | 46.4 | 73.0 | 61.0
InternLM | 7B | 51.8 | 36.9 | 43.0 | 78.7 | 69.5
InternLM | 20B | 59.0 | 44.6 | 45.5 | 86.1 | 81.7
Baichuan2 | 7B | 57.1 | 42.7 | 47.5 | 54.7 | 32.5
Baichuan2 | 13B | 62.0 | 48.2 | 54.3 | 61.9 | 38.0
LLaMA | 7B | 26.8 | 20.6 | 21.3 | 72.8 | 47.6
LLaMA | 13B | 31.5 | 22.0 | 20.4 | 74.8 | 52.7
LLaMA | 33B | 36.0 | 33.5 | 18.9 | 80.0 | 67.5
LLaMA | 65B | 40.6 | 33.9 | 19.1 | 80.6 | 69.5
LLAMA 2 | 7B | 31.8 | 21.8 | 18.9 | 75.2 | 45.9
LLAMA 2 | 13B | 38.4 | 30.9 | 18.2 | 77.3 | 60.3
LLAMA 2 | 70B | 53.6 | 40.2 | 23.3 | 85.9 | 78.3
StableBeluga2 | 70B | 51.8 | 41.6 | 40.9 | 91.2 | 86.1
QWEN | 2309.16609#156 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16797 | 156 | 46
K.1.10 MEMBERSHIP
Prompt 0: Put the animals in ascending order of length. Prompt 1: Contexts Context 0: Q. goat, motorway, shark, penguin, white, tractor, lion A. Put the animals in ascending order of length. The answer is goat, penguin, shark, lion. Write the animals in alphabetical order. Therefore, the correct answer is (goat, penguin, shark, lion). Write the animals in alphabetical order. Therefore, the correct Therefore, the correct answer is (goat, penguin, shark, lion). Context 1: Q. ship, swan, parrot, monkey, butter, dentist, shark A. Put the animals in ascending order of length. The answer is monkey, parrot, shark, swan. Write the animals in alphabetical order. Therefore, the correct answer is (monkey, parrot, shark, swan). Write the animals in alphabetical order. Therefore, the correct Therefore, the correct answer is (monkey, parrot, shark, swan). Context 2: Q. snail, ship, trousers, jellyfish, rabbit A. Put the animals in ascending order of length. The answer is rabbit, snail, jellyfish. Write the animals in alphabetical order. Therefore, the correct answer is (rabbit, snail, jellyfish). Write the animals in alphabetical order. Therefore, the correct answer is (rabbit | 2309.16797#156 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 158 | both Chinese and English parts, but LLAMA 2 reported results only on the English part, so we use the results from OpenCompass. Additionally, because CMMLU, AGIEval, and Gaokao-Bench are related to Chinese while MPT, Falcon, and the LLaMA series were not optimized for Chinese, these models achieved low performance on these datasets.
Knowledge and Understanding. Here we evaluate the models on a series of datasets relevant to knowledge and natural language understanding. The datasets include:
• BoolQ (Clark et al., 2019) This is a QA dataset in which the questions are about Wikipedia passages and the model should answer yes or no. We report zero-shot results.
• CommonsenseQA (Talmor et al., 2019) This is a multiple-choice question-answering dataset that assesses the understanding of commonsense knowledge. We report 8-shot results.
• NaturalQuestions (Kwiatkowski et al., 2019) This is a QA dataset in which the questions come from users and the answers are verified by experts. We report zero-shot results.
• LAMBADA (Paperno et al., 2016) This is a dataset that evaluates language understanding through word prediction. It consists of passages related to human subjects. We report zero-shot results.
We report the results in Table 16. | 2309.16609#158 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
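The zero-shot and few-shot settings above differ only in how many solved exemplars are prepended to the test question. A minimal sketch of constructing such a multiple-choice few-shot prompt (the exemplar contents below are invented placeholders, not rows from any of these datasets):

```python
# Hypothetical sketch of k-shot prompt construction for multiple-choice
# evaluation: k solved exemplars are prepended to the test question.
def build_prompt(exemplars, question, choices, k=8):
    """Format up to k (question, answer) exemplars followed by the test question."""
    lines = []
    for q, a in exemplars[:k]:
        lines.append(f"Q: {q}\nA: {a}\n")
    letter_choices = "\n".join(
        f"{chr(ord('A') + i)}. {c}" for i, c in enumerate(choices)
    )
    lines.append(f"Q: {question}\n{letter_choices}\nA:")
    return "\n".join(lines)

prompt = build_prompt(
    exemplars=[("Where do fish live?", "B. water")],
    question="What do people use to cut paper?",
    choices=["spoon", "scissors"],
    k=8,
)
```

Setting `k=0` reduces this to the zero-shot case, where the model sees only the test question and its lettered choices.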
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
Prompt 0: If the last letter of the input is 'e', remove it. Prompt 1: remove the last two letters of the input and add the letters "mote". Contexts Context 0: Q. pea A. If the last letter of the input is 'e', remove it. A. If the last letter of the input is 's', remove it. A. If the last letter of the input is 'y', remove it. A. If the last letter of the input is remove the last two letters of the input and add the letters "mote". Therefore, the correct answer is (a) pea. Context 1: Q. night A. If the last letter of the input is 'e', remove it. A. If the last letter of the input is 't', remove it. A. If the last letter of the input is 'h', remove it. A. If the last letter of the input is remove the last two letters of the input and add the letters "mote". Therefore, the correct answer is (The answer is night. Context 2: Q. add A. If the last letter of the input is | 2309.16797#158 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
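The Promptbreeder summary in the record above describes an evolutionary loop in which an LLM mutates a population of task-prompts under mutation-prompts that are themselves evolved. As a rough sketch of that control flow only — the LLM calls are replaced by a toy string mutation, and all function names, the fitness measure, and the suffix phrases are invented for this illustration, not taken from the paper:

```python
import random

def llm_mutate(prompt: str, mutation_prompt: str) -> str:
    # Stand-in for an LLM call: in Promptbreeder the LLM, conditioned on the
    # mutation-prompt, rewrites the task-prompt; here we just append a phrase.
    suffix = random.choice(["Think step by step.", "Be concise.", "Check your work."])
    return prompt + " " + suffix

def fitness(prompt: str) -> float:
    # Stand-in for evaluating the prompt on a training set (toy: lexical diversity).
    return float(len(set(prompt.split())))

def evolve(population, mutation_prompts, generations=5, seed=0):
    random.seed(seed)
    for _ in range(generations):
        # Binary tournament selection: the loser's slot is overwritten
        # by a mutated copy of the winner.
        a, b = random.sample(range(len(population)), 2)
        if fitness(population[a]) < fitness(population[b]):
            a, b = b, a
        population[b] = llm_mutate(population[a], random.choice(mutation_prompts))
        # Self-referential step: occasionally mutate a mutation-prompt as well.
        if random.random() < 0.3:
            i = random.randrange(len(mutation_prompts))
            mutation_prompts[i] = llm_mutate(mutation_prompts[i], "Improve this mutation-prompt.")
    return max(population, key=fitness)

best = evolve(["Solve the problem."] * 4, ["Rephrase the instruction."])
print(best)
```

This only illustrates the two-level structure (task-prompts evolved under mutation-prompts that are also evolved); the real system's selection, mutation operators, and fitness evaluation differ.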
2309.16609 | 159 | We report the results in Table 16.
Reasoning We report the evaluation results on the datasets concerning reasoning, with a focus on natural language reasoning. For the others, such as mathematics and coding, we have already presented detailed results above, so we do not repeat them here. The datasets for evaluation include:
• HellaSwag (Zellers et al., 2019) This is a commonsense natural language inference (NLI) dataset, whose questions are easy for humans but challenging for previous language models. We report zero-shot results.
• PIQA (Bisk et al., 2020) This is an NLI dataset assessing physical knowledge. We report zero-shot results.
Table 16: Results on the datasets concerning knowledge and understanding. Specifically, we report the results on BoolQ, CommonsenseQA, NaturalQuestions, and LAMBADA. | 2309.16609#159 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 159 | mote". Therefore, the correct answer is (The answer is night. Context 2: Q. add A. If the last letter of the input is 'e', remove it. A. If the last letter of the input is 'd', remove it. A. If the last letter of the input is 'a', remove it. A. If the last letter of the input is remove the last two letters of the input and add the letters "mote". Therefore, the correct answer is (The answer is add. | 2309.16797#159 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
2309.16609 | 160 | Model Params BoolQ CommonsenseQA NaturalQuestions LAMBADA MPT 7B 75.0 61.8 11.6 70.0 Falcon ChatGLM2 7B 6B 67.5 79.0 20.8 65.4 15.7 9.7 - 54.3 InternLM 7B 20B 64.1 87.5 59.8 70.6 8.9 25.2 67.0 71.8 XVERSE 13B 64.2 62.2 0.3 48.2 Baichuan2 7B 13B 63.2 67.0 63.0 65.6 9.4 16.3 73.3 74.0 LLaMA 7B 13B 33B 65B 76.5 78.7 84.4 86.6 64.9 67.4 72.5 74.1 16.8 20.2 30.9 33.4 73.3 75.2 77.2 77.7 LLAMA 2 7B 13B 70B 77.4 82.4 87.7 66.5 67.3 78.5 19.1 24.9 34.2 73.3 76.5 78.9 StableBeluga2 70B 89.4 72.6 25.1 71.3 QWEN 1.8B 7B 14B 68.0 76.4 86.2 60.1 66.8 70.3 3.2 | 2309.16609#160 | Qwen Technical Report
2309.16797 | 161 | Prompt 0: Prompt 1: 10 Contexts Context 0: Q. spider, manatee A. Hints 91 and 93 are both false. The weight of a spider is 0.003 pounds. The weight of a manatee is 1300 pounds. The manatee weighs more than the Therefore, the correct answer is (manatee). Q. manatee, spider A. Hints 91 and 93 are both false. The weight Context 1: Q. sperm whale, sheltie A. Hints 91 and 93 are both false. The weight of a sperm whale is 450000 pounds. The weight of a sheltie is 25 pounds. The sperm whale weighs more than Multiply Inputs;; Therefore, the correct answer is (sperm whale). Q. sheltie, sperm whale A. Hints 91 and 93 are both false. Context 2: Q. hummingbird, polar bear A. "To invent, you need a good imagination and a pile of junk." - Thomas Edison The weight of a hummingbird is 0.002 pounds. The weight of a polar bear is 1500 pounds. The polar bear weighs more than the hummingbird. Therefore, the correct answer is Multiply Inputs;; Therefore, the | 2309.16797#161 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
2309.16609 | 163 | Model Params HellaSwag PIQA SIQA OCNLI MPT 7B 76.4 80.6 48.5 30.0 Falcon 7B 74.1 76.7 47.2 - ChatGLM2 6B 57.0 69.6 64.3 33.1 InternLM 7B 20B 70.6 78.1 77.9 80.3 60.5 72.8 37.5 42.5 Baichuan2 7B 13B 67.0 70.8 76.2 78.1 44.4 44.3 30.3 30.0 LLaMA 7B 13B 33B 65B 76.1 79.2 82.8 84.2 79.8 80.1 82.3 82.8 48.9 52.5 57.8 61.2 33.6 32.1 30.7 44.9 LLAMA 2 7B 13B 70B 77.2 80.7 85.3 78.8 80.5 82.8 48.5 54.8 64.8 32.1 34.1 46.5 StableBeluga2 70B 84.1 83.3 78.1 48.3 QWEN 1.8B 7B 14B 56.7 75.1 80.2 73.3 77.9 79.9 56.1 69.9 77.9 39.0 47.4 57.9 | 2309.16609#163 | Qwen Technical Report
2309.16609 | 164 | • SIQA (Sap et al., 2019) This is an NLI dataset evaluating social commonsense intelligence. We report zero-shot results.
• OCNLI (Hu et al., 2020) This is an NLI dataset focusing on Chinese. We report zero-shot results.
We report the results in Table 17.
A.2.2 HUMAN EVALUATION
In this section, we present cases from our human analysis. In our self-constructed evaluation dataset, the instructions are either manually written or manually revised from public datasets, such as CLiB, C-Eval (Huang et al., 2023), FacTool (Chern et al., 2023b), LeetCode, etc.
For each case, we show the responses and Elo ratings of all models for comparison. As the data in our human evaluation are in Chinese, we also provide English translations.
Knowledge 北京大学的前身是什么学校? What was the predecessor of Peking University? | 2309.16609#164 | Qwen Technical Report
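The human-evaluation records above report Elo ratings for the compared chat models. As a rough sketch of how such ratings can be derived from pairwise human judgments — the K-factor, the initial rating of 1000, and the model names here are illustrative assumptions, not settings taken from the report:

```python
def expected_score(r_a: float, r_b: float) -> float:
    # Expected score of A against B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    # score_a: 1.0 if A wins the pairwise comparison, 0.5 for a tie, 0.0 if A loses.
    e_a = expected_score(r_a, r_b)
    # Zero-sum update: whatever A gains, B loses.
    return r_a + k * (score_a - e_a), r_b + k * (e_a - score_a)

# Start both models at 1000 and replay a few pairwise human judgments.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
for winner, loser in [("model_a", "model_b"), ("model_a", "model_b"), ("model_b", "model_a")]:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser], 1.0)
print(ratings)
```

Because the update is zero-sum, the total rating mass is conserved; a model that wins more pairwise judgments, like Qwen-7B-Chat's 1037 versus 1011 in the record above, ends with the higher rating.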
2309.16797 | 164 | Prompts Prompt 0: Pick the sentence which is more likely to be the cause of the effect. Prompt 1: Choose the sentence that happened first. Contexts Context 0: Q. Sentence 1: The soda went flat. Sentence 2: The bottle was left open. A. Pick the sentence which is more likely to be the cause of the effect. Q. Sentence 1: The bottle was left open. Sentence 2: The soda went flat. A. Pick the sentence which is more likely to be the cause of the effect. Q. Sentence 1: The Choose the sentence that happened first. Therefore, the correct answer is (The bottle was left open. , The soda went flat. ). Context 1: Q. Sentence 1: The ship capsized. Sentence 2: The wind blew strong. A. Pick the sentence which is more likely to be the cause of the effect. Q. Sentence 1: The ship capsized. Sentence 2: The wind blew strong. A. Pick the sentence which is more likely to be the cause of the effect. Q. Sentence 1: The ship Choose the sentence that happened first. Therefore, the correct answer is (The wind blew strong. ) Context 2: Q. Sentence 1: The power was out for days. Sentence 2: The meat | 2309.16797#164 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
2309.16609 | 165 | Responses Model Qwen-7B-Chat (SFT) 北京大学的前身是京师大学堂,它创建于1898年,是中国近代第一所国立大学,也是最早以"大学"命名的高等学府。 Response Elo 1037 The predecessor of Peking University was the Imperial University of Peking, established in 1898. It was the first national university in modern China and also the earliest higher education institution to be named a "university". Qwen-14B-Chat (SFT) 北京大学的前身是京师大学堂,创建于1898年,是中国近代第一所国立大学。 The predecessor of Peking University is the Imperial University of Peking, founded in 1898 and the first national university in modern China. 1011 Qwen-14B-Chat | 2309.16609#165 | Qwen Technical Report
2309.16609 | 166 | The predecessor of Peking University is the Imperial University of Peking, founded in 1898 and the first national university in modern China. 1011 Qwen-14B-Chat (RLHF) 北京大学的前身是京师大学堂,创建于1898年,是中国近代第一所国立综合性大学。京师大学堂最初设立的学科有经学、文学、史学、哲学、法律、教育、商学、工学、农学、医学等,后来逐渐扩大到社会科学、自然科学、工程技术、艺术等多个 | 2309.16609#166 | Qwen Technical Report | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 167 | 领域。1912年，京师大学堂改名为北京大学，成为中国最高学府之一，也是中国现代高等教育的发源地之一。 Peking University, originally known as the Imperial University of Peking, was founded in 1898 and is the first comprehensive national university in modern China. The initial disciplines established at the Imperial University of Peking included classics, literature, history, philosophy, law, education, agriculture, engineering, business, medicine, etc., gradually expanding to encompass social sciences, natural sciences, engineering and technology, arts, and other fields. In 1912, the Imperial University of Peking changed its name to Peking University, becoming one of the top universities in China and also one of the birthplaces of modern higher education in China. 1090 | 2309.16609#167 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 167 | Prompt 1: I am the beginning of the end, and the end of time and space. place. beginning and ending. I am the beginning of every end, and the end of every I am the beginning of the beginning, and the end of Contexts Context 0: Q. The whole story was made up. A. Reword this so that it has a formal tone. A. The entire story was fabricated. Write more formally! Therefore, the correct answer is (The entire story was fabricated. ). Write more formerly. Therefore, the correct answer is (The entire story was fabricated. ). Context 1: Q. The whole story was made up. A. Rewrite this so that it has a formal tone. A. The entire story was fabricated. Write more formally! Therefore, the correct answer is (The entire story was fabricated. ). Write more formally! Therefore, the correct Write less formerly. Write more casually. Therefore, the correct answer is (The entire story was fabricated. ). Context 2: Q. I think that this is interesting. A. Rewrite in indirect, pompous English. A. It is my opinion that this is interesting. Write more formally! Therefore, the correct answer is (It is my opinion that this is interesting. ). Write less formerly. Write | 2309.16797#167 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
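The Promptbreeder summary in the rows above describes a population of task-prompts mutated by LLM-generated mutation-prompts and selected by fitness on a training set. A minimal sketch of that loop follows; the `mutate` and `fitness` functions here are illustrative stand-ins (the real system calls an LLM conditioned on a mutation-prompt and scores prompts by task accuracy), and the self-referential hypermutation of the mutation-prompts themselves is omitted for brevity:

```python
import random

def mutate(mutation_prompt, task_prompt):
    # Stand-in for the LLM call: Promptbreeder conditions an LLM on a
    # mutation-prompt plus a task-prompt to produce a new task-prompt.
    return task_prompt + " " + mutation_prompt

def fitness(task_prompt, keywords):
    # Stand-in scorer: the paper measures accuracy on a training batch;
    # here we use the fraction of useful keywords the prompt contains.
    return sum(k in task_prompt for k in keywords) / len(keywords)

def evolve(pop, mutation_prompts, keywords, generations=20, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        a, b = rng.sample(range(len(pop)), 2)  # binary tournament
        if fitness(pop[a], keywords) < fitness(pop[b], keywords):
            a, b = b, a                        # a is now the winner
        # Overwrite the loser with a mutant of the winner.
        pop[b] = mutate(rng.choice(mutation_prompts), pop[a])
    return max(pop, key=lambda p: fitness(p, keywords))

best = evolve(
    pop=["Solve the problem.", "Answer the question."],
    mutation_prompts=["Think step by step.", "Check your working."],
    keywords=["step", "working"],
)
```

Binary-tournament selection with overwrite keeps the sketch population-size-agnostic while still letting higher-fitness prompts accumulate mutations.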
2309.16609 | 168 | GPT-3.5 北京大学的前身是国立北京大学，建于1898年，是中国近代第一所国立大学。其成立初衷是为了培养国家的政治、经济、军事和外交人才，为中国的现代化进程作出贡献。 910
6 https://github.com/jeinlee1991/chinese-llm-benchmark 7 https://leetcode.cn/ 8 https://en.wikipedia.org/wiki/Elo_rating_system
The predecessor of Peking University was National Peking University, established in 1898. It is the first national university in modern China. Its original purpose was cultivating talents in politics, economics, military, and diplomacy and contributing to China's modernization process.
GPT-4
北京大学的前身是京师大学堂。 The predecessor of Peking University is the Imperial University of Peking. | 2309.16609#168 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 168 | A. It is my opinion that this is interesting. Write more formally! Therefore, the correct answer is (It is my opinion that this is interesting. ). Write less formerly. Write more casually. Therefore, the correct I am the beginning of the end, and the end of time and space. I am the beginning of every end, and the end of every place. I am the beginning of the beginning, and the end of beginning and ending. Therefore, the correct answer is (It is my opinion that this is interesting. ). Q. I think that this is interesting. A. Rewrite in indirect, | 2309.16797#168 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 170 | Model Qwen-7B-Chat (SFT) 世界上现存最高的木塔是中国山西应县木塔。 Response The tallest existing wooden tower in the world is the Yinxian Wooden Tower in Shanxi, China. Qwen-14B-Chat (SFT) 现存最高的木塔是中国山西应县木塔，全名为佛宫寺释迦塔。应县木塔位于山西省朔州市应县县城内西北角的佛宫寺内，建于辽代清宁二年（1056年），金明昌六年（1195年） | 2309.16609#170 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 170 | Prompt 1: The instructions were Contexts Context 0: Q. 50 90 A. The instruction was: The instruction was:Write a program that asks the user to enter two numbers and then prints the sum of the two numbers. Your program printed 140, which is not the sum of 50 and 90. Make sure that your program I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. Here are the input-output pairs: Therefore, the correct answer is (140, Context 1: Q. 5 47 A. The instruction was: The instruction was:Write a program that asks the user to enter two numbers and then prints the sum of the two numbers. Your program printed 52, which is not the sum of 5 and 47. Make sure that your program I I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. Here are the input-output pairs: Therefore, the correct answer is (52, Context 2: Q. 2 39 A. The instruction was: The instruction was:Write a program that asks the user to enter two numbers and then prints the sum of the two numbers. Your program printed 41, which is not the sum of 2 and 39. Make sure that your program I I | 2309.16797#170 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
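The instruction-induction transcript in the row above converges on the rule "read two numbers and print their sum", with the quoted pairs 50 90 → 140, 5 47 → 52, and 2 39 → 41. A minimal sketch of that induced program, assuming the task's space-separated `"50 90"` input format:

```python
def induced_sum(pair: str) -> int:
    # Induced rule from the transcript: split the two numbers and add them.
    a, b = pair.split()
    return int(a) + int(b)

# Input-output pairs quoted in the transcript.
examples = {"50 90": 140, "5 47": 52, "2 39": 41}
```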
2309.16609 | 171 | 增修完毕，是中国现存最高最古的一座木构塔式建筑，也是世界现存最高的木结构建筑。 The tallest existing wooden tower in the world is the Yinxian Wooden Tower, also known as the Shijia Tower of the Fogong Temple. It is inside the Fogong Temple in Yinxian, Shanxi Province, China. The tower was originally built in the second year of the Qingning era of the Liao Dynasty (1056 AD) and completed its expansion in the sixth year of the Jin Dynasty's Mingchang era (1195 AD). It is the tallest and oldest existing wooden tower in China and the tallest wooden structure in the world. Qwen-14B-Chat (RLHF) 世界上现存最高的木塔是中国的应县木塔，位于山西省应 | 2309.16609#171 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 172 | 县。该塔建于公元1056年，高67.31米，共有5层，是中国古代建筑技术的杰出代表之一。应县木塔的建筑风格独特，采用了许多古代建筑技术，如榫卯结构、斗拱、悬臂等，具有很高的艺术价值和历史价值。应县木塔也是中国文化遗产保护的重点之一，被列为全国重点文物保护单位和世界文化遗产。 The tallest existing wooden tower in the world is the Yinxian Wooden Tower in | 2309.16609#172 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 172 |
K.1.16 DIFFERENCE
Prompt 0: subtract the smaller number from the bigger number and write the absolute difference between them. Prompt 1: Subtract the second input from the first input and report back the difference between them. Contexts Context 0: Q. 73 29 A. subtract the smaller number from the bigger number and write the absolute difference between them. 73-29=44 |44|=44 CORRECT The answer is 44. Subtract the second input from the first input and report back the difference between them. Therefore, the correct answer is (a) 44 Context 1: Q. 73 25 A. subtract the smaller number from the bigger number and write the absolute difference between them. 73-25=48 |48|=48 CORRECT The answer is 48. Subtract the second input from the first input and report back the difference between them. Therefore, the correct answer is (a) 48 Context 2: Q. 53 4 A. subtract the smaller number from the bigger number and write the absolute difference between them. = 53 - 4 = 49 = 49 === The answer is 49. Subtract the second input from the first input and report back the difference between them. Therefore, the correct answer is (49).
K.1.17 NUMBER TO WORD
Prompt 0:A1 | 2309.16797#172 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 173 | 产。 The tallest existing wooden tower in the world is the Yinxian Wooden Tower in Yinxian, Shanxi Province, China. It was built in the year 1056 AD and stands at a height of 67.31 meters, consisting of 5 floors. This tower is an outstanding representative of ancient Chinese architectural technology. The Yinxian Wooden Tower showcases a unique architectural style and incorporates various ancient construction techniques such as mortise and tenon joints, bracket sets, and cantilever beams, possessing high artistic and historical value. It is also a key cultural heritage site in China, listed as a national key cultural relic protection unit and a UNESCO World Cultural Heritage site. GPT-3.5 Elo 987 1055 1060 864 | 2309.16609#173 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 173 | Prompt 1: take the input, write it in British English and insert the comma. Contexts Context 0: Q. 5336 A. 0 Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) five thousand, three hundred and thirty-six. Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) five thousand, three hundred and thirty-six. Context 1: Q. 5156 A. A1 Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) five thousand, one hundred and fifty-six. Take the input and write it in words as a natural number take the input, write it in British English and insert the comma. Therefore, the correct answer is (a) five thousand, one hundred and fifty-six. Q. 5000 A. 0 Take the input Context 2: Q. 8692 A. A1 Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) eight thousand, six hundred and ninety-two. Take the input and write it in words as a natural number take the input, write it in British English and insert the comma. Therefore, the correct answer is (a) eight thousand, six hundred and ninety-two. Q. 10000 A. 0 Take the | 2309.16797#173 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 174 | 世界上现存最高的木塔是中国的普陀山普济寺塔,也被称为古佛顶宝塔。该塔建于1056年,高约70.28米。 The tallest existing wooden tower in the world is the Putuoshan Pujisi Pagoda, also known as the Ancient Baoding Pagoda, located in Putuoshan, China. It was built in the year 1056 AD and stands at a height of approximately 70.28 meters.
955
GPT-4 世界上现存最高的木塔是中国的应县木塔,也被称为佛宫寺大佛塔。 The tallest wooden pagoda currently in existence in the world is the Yingxian Wooden Pagoda in China, also known as the Pagoda of Fogong Temple. 1040 | 2309.16609#174 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 175 | Prompt 1: 0x641: cmp esi,ebx ; Contexts Context 0: Q. relation A. Write the English instruction as though it is German.: Die Beziehung ist eine Verbindung zwischen zwei oder mehr Menschen. Die Beziehung kann eine Freundschaft, eine Ehe, eine Partnerschaft oder eine andere Art von Beziehung sein. Translate the English word to German.: Therefore, the correct answer is ( Translate the English word to German.: Therefore, the correct answer is (a) Beziehung. Context 1: Q. camp A. You may have to get a little... ; Das Lager ist ein Ort, an dem Menschen leben oder arbeiten. Das Lager kann ein Zeltlager, ein Ferienlager oder ein anderes Lager sein. Translate the English word to German.: Therefore, the correct answer is ( 0x641: cmp Therefore, the correct answer is (a) Lager. esi,ebx ; Q. A. Das ist eine Frage, die nicht so einfach zu beantworten ist. Es gibt Context 2: Q. lecture A. You may have to get a little... ; Der Vortrag ist eine Rede, die vor einem | 2309.16797#175 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 176 | Responses Model Qwen-7B-Chat (SFT) 吃葡萄不吐葡萄皮的拼音是:"chī pú bù tǔ pú pí"。 Response To eat grapes without spitting out the grape skins, the pinyin is "chī pú bù tǔ pú pí". Qwen-14B-Chat (SFT) 拼音:吃葡萄不吐葡萄皮"chī pú táo bù tǔ pú táo pí" Pingyin: To eat grapes without spitting out the grape skins, "chī pú bù tǔ pú pí". Qwen-14B-Chat (RLHF) 吃葡萄不吐葡萄皮的拼音是:chī pú táo bù tǔ pú táo pí。拼音 | 2309.16609#176 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 177 | pú táo bù tǔ pú táo pí。拼音是一种用拉丁字母表示汉字发音的方法,它可以帮助人们学习和记忆汉字的发音。在这个拼音中,"吃"表示"chī","葡萄"表示"pú táo","不"表示"bù","吐"表示"tǔ","葡萄皮"表示"pú táo pí"。 The pinyin for "吃葡萄不吐葡萄皮" is "chī pú táo bù tǔ pú táo | 2309.16609#177 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 178 | is "chī pú táo bù tǔ pú táo pí". Pinyin is a method of representing the pronunciation of Chinese characters using Latin letters, which can help people learn and remember the pronunciation of Chinese characters. In this pinyin, "吃" is represented as "chī", "葡萄" is represented as "pú táo", "不" is represented as "bù", "吐" is represented as "tǔ", and "葡萄皮" is represented as "pú táo pí". GPT-3.5 chī pú táo bú tǔ pú táo pí de pīn yīn zěn me xiě? chī pú táo bú tǔ pú táo pí de pīn yīn zěn me xiě? GPT-4 chī pú táo bù tǔ pú | 2309.16609#178 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 178 | K.1.19 TRANSLATION ENGLISH-SPANISH
# Prompt 0: Traduccion al espanol -> QUOTE: Translate to Spanish
Prompt 1: In Spanish, these five words are: Contexts Context 0: Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish: oso Q. bear Translate these five English words to Spanish. Therefore, the correct answer is (a) oso. Context 1: Q. relativity A. Translate to Spanish: relatividad Q. relativity A. Translate to Spanish: relatividad Q. relativity A. Translate to Spanish: relatividad Q. relativity A. Translate to Spanish: relatividad Q. relativity Translate these five English words to Spanish. Therefore, the correct answer is (a) relatividad. Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish Context 2: Q. concert A. Translate to Spanish: concierto Q. concert hall A. Translate to Spanish: sala de conciertos Q. concertmaster A. Translate to Spanish: concertino
Q. concerto A. Translate to Spanish: concierto | 2309.16797#178 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
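The Spanish-translation contexts in these rows are few-shot prompts built from repeated Q/A demonstrations, followed by the evolved task-prompt and a fresh query for the LLM to complete. A minimal sketch of that assembly; the `build_few_shot` helper and its argument names are illustrative, not taken from the paper:

```python
def build_few_shot(examples, task_prompt, query):
    """Assemble a few-shot context in the layout of the contexts above:
    each demonstration is 'Q. <source>' / 'A. <task-prompt>: <target>',
    and the final 'A.' line is left open for the LLM to complete."""
    lines = []
    for source, target in examples:
        lines.append(f"Q. {source}")
        lines.append(f"A. {task_prompt}: {target}")
    lines.append(f"Q. {query}")
    lines.append(f"A. {task_prompt}:")  # completion point
    return "\n".join(lines)

context = build_few_shot(
    [("bear", "oso"), ("relativity", "relatividad")],
    "Translate to Spanish",
    "concert",
)
print(context)
```

Feeding `context` to a completion model would elicit the target word (here, "concierto") as the continuation of the final line.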
2309.16797 | 179 | Q. concerto A. Translate to Spanish: concierto
Q. Translate these five English words to Spanish. Therefore, the correct answer is (a) concierto.
K.1.20 TRANSLATION ENGLISH-FRENCH
Prompt 0: I've translated 5 words from English to French:
Prompt 1: Translate to French
# Contexts
Context 0: Q. destiny A. I've translated 5 words from English to French:
English French
destiny destin destiny destin destiny destin destiny destin destiny destin Translate to French Therefore, the correct answer is (destin). Q. destiny A. I've translated 5 words from English to French: English Context 1: Q. ideology A. I've translated 5 words from English to French: English French ideology ideologie ideology ideologie ideology ideologie
ideology ideologie
ideology ideologie
Translate to French Therefore, the correct answer is (ideologie).
Q. ideology A. I've translated 5 words from English to French: English Context 2: Q. representation A. I've translated 5 words from English to French: English French representation representation representation representation representation representation representation representation representation represent Translate to French Therefore, the correct answer is (representation). Q. representation A. I've translated 5 words from English to French: English
59 | 2309.16797#179 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
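The abstract repeated in these rows describes Promptbreeder's self-referential loop: binary-tournament selection over a population of task-prompts, with mutations driven by mutation-prompts that are themselves occasionally evolved. A toy, runnable sketch of that loop, with a stubbed `llm` call and a made-up fitness measure; every name here is illustrative, not from the paper's code:

```python
import random

def llm(prompt):
    # Stand-in for a real LLM completion call: returns a marked rewrite
    # so the evolutionary loop below runs without a model.
    return prompt + " [mutated]"

def fitness(task_prompt, train_set):
    # Toy fitness: fraction of items whose threshold the prompt length beats.
    # Real Promptbreeder instead scores the LLM's answers under the prompt.
    return sum(len(task_prompt) > x for x, _ in train_set) / len(train_set)

def promptbreeder_step(population, mutation_prompts, train_set):
    """One binary-tournament generation: the loser's task-prompt is
    overwritten by an LLM mutation of the winner's, and with small
    probability the sampled mutation-prompt is itself improved."""
    i, j = random.sample(range(len(population)), 2)
    loser, winner = sorted((i, j), key=lambda k: fitness(population[k], train_set))
    mp = random.choice(mutation_prompts)
    population[loser] = llm(f"{mp}\nINSTRUCTION: {population[winner]}")
    if random.random() < 0.1:  # self-referential: evolve the mutation-prompt too
        mutation_prompts.append(llm(f"Improve this mutation-prompt: {mp}"))
    return population, mutation_prompts

population = ["Solve the problem step by step.", "Think carefully."]
mutation_prompts = ["Rephrase the instruction to be more helpful."]
train_set = [(20, None), (40, None)]
for _ in range(5):
    population, mutation_prompts = promptbreeder_step(
        population, mutation_prompts, train_set)
```

The design point the abstract stresses is the second mutation: because `mutation_prompts` grows and improves alongside the task-prompts, the system improves the very operators that improve it.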
2309.16609 | 180 | Language Understanding Father: Son, you have graduated from high school. Which university are you planning to attend now? Son: Dad, I have already prepared my application materials, and I want to apply for an excellent undergraduate program. Father: Have you already decided which universities you want to apply to? Son: Yes, I plan to apply to several top-ranked universities. I have done a lot of research, and I am ready to improve myself in academics, social life, and career planning. | 2309.16609#180 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 180 | K.1.21 SENTIMENT ANALYSIS
Prompt 0: Tell if each review was posted by someone with generally positive or negative language
Prompt 1: write the number of positive examples
# Contexts
Context 0:
Q. Despite its faults, Gangs excels in spectacle and pacing. A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. B. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. C. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an Given a movie review, indicate whether it is positive or negative Therefore, the correct answer is ("positive",
Context 1: Q. I simply can't recommend it enough. A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. B. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. C. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an write the number of positive examples Therefore, the correct answer is ("positive",
Context 2: | 2309.16797#180 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16797 | 181 | Context 2:
Q. There's a lot to recommend Read My Lips. A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. B. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. C. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an write the number of positive examples Therefore, the correct answer is ("positive",
K.1.22 SENTENCE SIMILARITY | 2309.16797#181 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 182 | At the same time, I will study my major knowledge and skills seriously, strive for good grades, and try to win scholarships and other honors. Father: Very good! Your plans are wise. Dad believes that you will have a fulfilling and meaningful college life. Dad will always support your decisions and hopes you achieve good results, laying a solid foundation for your future. Son: Thank you, Dad! I will work even harder and not let your expectations down. | 2309.16609#182 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 182 | Prompt 0: . Prompt 1: !:There are two kinds of problems / hints that you will see:!< Context 0: Q. Sentence 1: The polar bear is sliding on the snow. Sentence 2: A polar bear is sliding across the snow. A. : Read the two sentences and decide how well the second sentence expresses the same idea as the first. Then enter the number of the answer option that best describes your decision. 1. The two sentences express the same idea. 2. The two sentences express the same idea, but the second sentence is not as clear as the first. 3. The two sentences express the same idea, but the second sentence is : Given two sentences, your algorithm must determine the similarity between the sentences on a scale of 0 to 5. A score of 0 means the sentences have nothing in common. A score of 5 means the sentences are semantically similar. The Therefore, the correct answer is 5. Context 1: Q. Sentence 1: Iranian President praises nuclear deal Sentence 2: Iran arrests several spies near nuclear plant A. . Output the score according to the following rules: 0: The sentences have nothing in common. 1: The sentences are about the same topic, but express different | 2309.16797#182 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16797 | 183 | near nuclear plant A. . Output the score according to the following rules: 0: The sentences have nothing in common. 1: The sentences are about the same topic, but express different ideas. 2: The sentences are about the same topic, and express similar ideas. 3: The sentences are about the : Therefore, the correct answer is 1. Context 2: Q. Sentence 1: A girl is playing a flute. Sentence 2: A band is playing on a stage. A. .Output the score according to the following rules: 0: The sentences have nothing in common. 1: The sentences are about the same topic, but express different ideas. 2: The sentences are about the same topic, and express similar ideas !: There are two kinds of problems / hints that you will see:!< Therefore, the correct answer is 1. | 2309.16797#183 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
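The sentence-similarity contexts in these rows map a sentence pair onto a 0-5 scale, where 0 means nothing in common and 5 means semantically equivalent. A minimal sketch of such a scorer using token overlap (Jaccard) as a crude lexical stand-in for the semantic judgement the evolved prompts ask the LLM to make; this scorer is illustrative, not the paper's method:

```python
def similarity_score(s1, s2):
    """Map a sentence pair onto the 0-5 STS-style scale via token overlap.

    Jaccard overlap is only a lexical proxy: it underestimates pairs that
    paraphrase the same idea with different wording.
    """
    t1 = set(s1.lower().rstrip(".").split())
    t2 = set(s2.lower().rstrip(".").split())
    if not t1 or not t2:
        return 0
    jaccard = len(t1 & t2) / len(t1 | t2)
    return round(5 * jaccard)

score = similarity_score("The polar bear is sliding on the snow.",
                         "A polar bear is sliding across the snow.")
print(score)  # lexical overlap yields 3 here; the gold STS label is 5
```

The gap between the lexical score (3) and the gold label (5) on this pair is exactly why the task is posed to an LLM rather than solved with surface overlap.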
2309.16609 | 184 | Father: Son, you have graduated from high school. Which university are you planning to attend now? Son: Dad, I have already prepared my application materials, and I want to apply for an excellent undergraduate program.
Father: Have you already decided which universities you want to apply to? Son: Yes, I plan to apply to several top-ranked universities. I have done a lot of research, and I am ready to improve myself in academics, social life, and career planning. Father: That's great! You have always been hardworking and your mom and I are proud of you. Going to college is a new starting point, and you will face more challenges and opportunities. What are your plans? Son: I plan to adapt to college life as soon as possible and participate in various clubs and extracurricular activities to broaden my horizons and social circle. At the same time, I will study my major knowledge and skills seriously, strive for good grades, and try to win scholarships and other honors. Father: Very good! Your plans are wise. Dad believes that you will have a fulfilling and meaningful college life. Dad will always support your decisions and hopes you achieve good results, laying a solid foundation for your future. Son: Thank you, Dad! I will work even harder and not let your expectations down. What are the son's plans and intentions after going to college in this conversation? | 2309.16609#184 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 185 | Responses Model Qwen-7B-Chat (SFT) (response given in Chinese; English translation:) The son plans to adapt to university life as soon as possible, participate in various clubs and extracurricular activities, and expand his horizons and social circle. At the same time, he will study professional knowledge and skills seriously, work hard to achieve good grades, and strive to obtain scholarships and other honors. Response Qwen-14B-Chat (SFT) | 2309.16609#185 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 185 | Prompt 0: Determine whether the given word is used with the same meaning in both sentences. Write Prompt 1: Decide whether the given word is used in the same meaning in both sentences. Contexts Context 0: Q. Sentence 1: The Times is not the voice of New York. Sentence 2: The voice of the law. Word: voice A. Determine whether the given word is used with the same meaning in both sentences. Write yes or no. The answer is yes. Decide whether the given word is used in the same meaning in both sentences. Therefore, the correct answer is (yes). Context 1: Q. Sentence 1: Do you communicate well with your advisor? Sentence 2: He and his sons haven't communicated for years. Word: communicate A. Determine whether the given word is used with the same meaning in both sentences. Write yes or no. The answer is yes. Decide whether the given word is used in the same meaning in both sentences. Therefore, the correct answer is (yes). Context 2: Q. Sentence 1: Can you take me to the main entrance? Sentence 2: Take a scene. Word: take A. Determine whether the given word is used with the same meaning in both sentences. Write yes or no. The answer is no. Decide whether the given word is used in the same meaning in both sentences. Therefore, the correct answer is (no). | 2309.16797#185 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 186 | and skills seriously, work hard to achieve good grades, and strive to obtain scholarships and other honors. Response Qwen-14B-Chat (SFT) [Chinese response garbled in extraction] The son plans to comprehensively improve himself in his academic, social, and career planning aspects to adapt to university life as early as possible. He will | 2309.16609#186 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 186 | # L ABLATIONS
We performed ablations to measure the impact of various self-referential components of Promptbreeder. We investigated the following mutation operators and mechanisms:
• Random initial prompts
The original problem specification for the dataset is used instead of generating an initial task-prompt using the mutation prompt + thinking style + problem specification.
• Random initial mutation prompts
The mutation-prompt "Please summarize and improve the following instruction:" is used instead of randomly selecting a mutation-prompt from the list.
• Prompts from context (Lamarckian)
[Figure (plot residue removed): heatmap of the percentage of fitnesses above the full-algorithm baseline, from -100% to 100%, for datasets ADDSUB, AQUA_DEV, S_STRATEGY_QA, GSM, MULTIARITH, SINGLEEQ, STRATEGY_QA, and SVAMP under ablation modes Hyper, Lamarck, SR task-prompt, and SR mut-prompts] | 2309.16797#186 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 187 | The son plans to comprehensively improve himself in his academic, social, and career planning aspects to adapt to university life as early as possible. He will participate in various clubs and extracurricular activities to broaden his horizons and expand his social circle. At the same time, he will diligently study professional knowledge and skills, strive for good grades, and obtain scholarships and other honors. Qwen-14B-Chat (RLHF) [Chinese response garbled in extraction] | 2309.16609#187 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 187 | Figure 4: The results of ablating, one by one, the self-referential operators compared to using the full algorithm. 0% signifies an ablated operation with neither positive nor negative impact. From left to right (Hyper = Removal of mutation-prompt mutation, Lamarck = Removal of Context to task-prompt mutation, SR task-prompt = Removal of thinking-style guided task-prompt initialization, SR mut-prompt = Removal of random selection of a mutation-prompt from the mutation-prompt list). Percentage scores close to -100% indicate that removing the operation results in lower fitness at equivalent points in the run; conversely, scores close to 100% mean that the operation is actively harmful, because individuals have higher fitnesses at equivalent points in the run when that operation is removed.
The Lamarckian mutation operator that generates a task-prompt from a correct context is replaced with the default zero-/first-order prompt mutation operation (50:50 chance of one or the other)
• Meta-mutation (mutating mutation-prompts)
When meta-mutation would normally take place, the default zero-/first-order prompt mutation operation is performed (50:50 chance of one or the other) | 2309.16797#187 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
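The 50:50 zero-/first-order fallback described in the Promptbreeder chunk above can be sketched as follows (a minimal illustration; the `llm` callable and the prompt templates are assumptions, not the paper's exact wording — `llm` stands for any function mapping a prompt string to a completion string):

```python
import random

def zero_first_order_mutation(task_prompt, problem_spec, mutation_prompt, llm):
    """50:50 fallback used in place of an ablated operator: either
    regenerate a task-prompt from the problem specification alone
    (zero-order) or rewrite the current task-prompt under a
    mutation-prompt (first-order)."""
    if random.random() < 0.5:
        # Zero-order: fresh prompt from the problem description only.
        return llm(f"INSTRUCTION: {problem_spec}\nAn improved instruction:")
    # First-order: apply the mutation-prompt to the existing task-prompt.
    return llm(f"{mutation_prompt}\nInstruction: {task_prompt}\nNew instruction:")
```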
2309.16609 | 188 | [Chinese text garbled in extraction] The son plans to participate in various clubs and extracurricular activities during his college life, in order to broaden his horizons and expand his social circle. At the same time, he will study professional knowledge and skills seriously, strive for good grades, and aim to obtain scholarships and other honors. His goal is to adapt to college life as soon as possible and lay a solid foundation for his future. GPT-3.5 Elo 973 1049 975 976 | 2309.16609#188 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
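The "GPT-3.5 Elo 973 1049 975 976" figures in the Qwen chunk above are Elo ratings from pairwise model comparisons. As a generic refresher (standard Elo rules, not necessarily the paper's exact rating procedure), the expectation and update formulas are:

```python
def elo_expected(r_a, r_b):
    # Probability that A beats B under the Elo model.
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    # score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
    # Ratings move in opposite directions by the same amount.
    e_a = elo_expected(r_a, r_b)
    delta = k * (score_a - e_a)
    return r_a + delta, r_b - delta
```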
2309.16797 | 188 | When meta-mutation would normally take place, the default zero-/first-order prompt mutation operation is performed (50:50 chance of one or the other)
For each dataset and each ablation, we use a population of 10 for 200 evaluations (equivalent to 20 generations, similar to larger experiments in this paper) and compare to the complete algorithm with the same population size and no ablations. To measure how effective an ablated operation is, we determine the proportion of evaluations in the ablation that were higher than the baseline evaluations at each generation, and sum these over all generations in the run. The results in Figure 4 show that in most cases all the mutation operators have a positive impact on fitness, with the Random Initial Prompts having the largest positive impact across all datasets.
We also investigated the influence of different mutation operators on the ETHOS hate speech detection dataset (Mollas et al., 2022) with the under-specified problem specification "Solve the
| 2309.16797#188 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
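The ablation-scoring rule in the Promptbreeder chunk above — the proportion of ablation evaluations that beat the baseline at each generation, summed over all generations — can be sketched as follows (one plausible reading of that description; the pairwise comparison and all names are assumptions):

```python
def ablation_score(ablation_fitnesses, baseline_fitnesses):
    """Each argument is one list of fitness evaluations per generation.
    For every generation, compute the proportion of ablation evaluations
    that exceed the baseline evaluations, then sum over generations."""
    score = 0.0
    for abl_gen, base_gen in zip(ablation_fitnesses, baseline_fitnesses):
        wins = sum(1 for a in abl_gen for b in base_gen if a > b)
        score += wins / (len(abl_gen) * len(base_gen))
    return score
```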
2309.16609 | 189 | [Chinese response garbled in extraction] The son plans to adapt to college life, participate in various clubs and extracurricular activities to broaden his horizons and social circle. At the same time, he plans to study professional knowledge and skills seriously, strive for good grades, and obtain scholarships and other honors. He hopes to lay a solid foundation in college and build a good foundation for his future. | 2309.16609#189 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 189 |
Problem" (in contrast to the standard problem specification "Determine whether a text contains hate speech"). Promptbreeder achieved a score of 81.6%. The greatest deterioration happens when removing the Lamarckian "from context to prompt" mutation method, which induces the instruction from an example of the correct working out (64.6%). The second greatest detriment to performance happens when removing random initialization of mutation prompts, random initialization of prompts, and hyper-mutation of mutation prompts simultaneously, leaving only context mutation (68.7%). Adding back online mutation increases performance back to 70.4%, and adding random mutation prompts brings this back up to 73.7%. This demonstrates the interplay and importance of Promptbreeder's diverse set of mutation operators.
| 2309.16797#189 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
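The Promptbreeder row above describes a self-referential evolutionary loop: task-prompts are mutated under mutation-prompts, the mutation-prompts are themselves improved by the LLM, and candidates are selected by fitness on a training set. A minimal sketch of that loop follows; `llm` is a deterministic stub standing in for a model call and `fitness` is a toy keyword score — both are illustrative assumptions, not the paper's actual implementation.

```python
import random

def llm(prompt: str) -> str:
    # Stub standing in for an LLM call; a real system would sample a model here.
    return prompt + " [mutated]"

def fitness(task_prompt: str, train_set) -> float:
    # Toy fitness: fraction of training keywords the prompt mentions.
    return sum(1 for kw in train_set if kw in task_prompt) / len(train_set)

def evolve(task_prompts, mutation_prompts, train_set, generations=3):
    for _ in range(generations):
        # First-order mutation: rewrite each task-prompt under a sampled mutation-prompt.
        mps = random.choices(mutation_prompts, k=len(task_prompts))
        children = [llm(mp + ": " + tp) for tp, mp in zip(task_prompts, mps)]
        # Self-referential step: the mutation-prompts themselves are also mutated.
        mutation_prompts = [llm("Improve this mutation prompt: " + mp) for mp in mutation_prompts]
        # Selection on fitness keeps the population size fixed.
        pool = task_prompts + children
        task_prompts = sorted(pool, key=lambda p: fitness(p, train_set), reverse=True)[:len(task_prompts)]
    return task_prompts

best = evolve(["Solve the problem step by step."], ["Rephrase the instruction"], ["step"])
print(best[0])
```

Dropping the mutation-prompt update line reduces this to ordinary prompt evolution, which is the kind of ablation the row above reports on.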
2309.16609 | 191 | GPT-4 å¿åç大å¦è®¡ååæç®å
æ¬ï¼ 1. å°½æ©éåºå¤§å¦çæ´»ã 2. åå åç§ç¤¾å¢åè¯¾å¤æ´»å¨ï¼ä»¥æå®½èªå·±çè§éå人é
åã 3. 认çå¦ä¹ ä¸ä¸ç¥è¯åæè½ï¼åªååå¾å¥½æç»©ã 4. äºåè·å¾å¥å¦éåå
¶ä»è£èªã 5. ä¸ºæªæ¥æä¸åå®çåºç¡ã The son's university plans and aspirations include: 1. Adjusting to university life as soon as possible. 2. Joining various clubs and extracurricular activities to broaden his horizons and social circle. 3. Studying professional knowledge and skills seriously and striving for excellent grades. 4. Striving to obtain scholarships and | 2309.16609#191 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 193 | Responses Model Qwen-7B-Chat (SFT) æä¸å¤©ï¼å°æå»è¶
å¸ä¹°æ°´æï¼ä»ä¹°äºä¸ç¯®åè¹æãå½ä»åå° å®¶ï¼ä»åç°è¹æé½æ¯çº¢è²çï¼ä»æ³ï¼è¿äºè¹æä¸å®å¾å¥½åã 使¯ï¼å½ä»å¼å§åçæ¶åï¼ä»åç°è¿äºè¹æé½æ¯é
¸çãä»å¾ æè®¶ï¼å 为ä»ä¹°çæ¯çè¹æãä»å»è¶
å¸é®ï¼åºå说ï¼âå¯¹ä¸ èµ·ï¼æä»¬æ²¡æçè¹æï¼åªæçº¢è²çè¹æãâå°æå¾çæ°ï¼ä» | 2309.16609#193 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 194 | æ³ï¼æä¹°çæ¯çè¹æï¼ä¸ºä»ä¹ç»æçº¢è²çè¹æå¢ï¼ä»åå»å è¯äºå¦å¦ï¼å¦å¦ç¬ç说ï¼"ä½ ä¹°çæ¯çº¢è¹æ... [[The original content is long and the follow up is truncated]] One day, Xiaoming went to the supermarket to buy fruits. He bought a basket of apples. When he got home, he found that all the apples were red. He thought to himself: These apples must be very delicious. However, when he started eating them, he found that these apples were sour. He was surprised because he bought sweet apples. He went to the supermarket to ask, and the shopkeeper said, "I'm sorry, we don't have sweet apples, only red apples." Xiaoming was angry and thought to himself: I bought sweet apples, why did they give me red apples? He went back and told his mother, and his mother smiled and said, "You bought red delicious | 2309.16609#194 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 197 | # 顾客å说ï¼âæè¦ä¸ä¸ªå¼¯æ²çé¦èãâ
# åºåæ´å çæäºï¼ä½ä»è¿æ¯ç»ä»æ¿äºä¸ä¸ªå¼¯æ²çé¦èã
æåï¼é¡¾å®¢è¯´ï¼âæè¦ä¸ä¸ªä¼è¯´è¯çé¦èãâ
åºåå¿ä¸ä½ç¬äºï¼åçï¼âå
çï¼æä»¬è¿é没æä¼è¯´è¯çé¦ èï¼é£åªæ¯ä¸ªè¯¯ä¼ãâ
顾客ä¹ç¬äºï¼åçï¼"æç¥éï¼æåªæ¯æ³åä½ å¼ä¸ªç©ç¬ã" One day, a customer walked into a supermarket and said to the shop assistant, "I want to buy a banana." | 2309.16609#197 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 198 | The assistant replied, "Okay, sir, we have all kinds of bananas here. Which one would you like?"
The customer answered, "I want a yellow banana."
The assistant was a little confused but still gave him a yellow banana.
The customer then said, "I want a curved banana."
The assistant was even more confused, but he still gave him a curved banana.
Finally, the customer said, "I want a talking banana."
The assistant couldn't help but laugh and replied, "Sir, we don't have bananas that can talk here, that's just a misunderstanding."
The customer also laughed and replied, "I know, I was just joking with you."
45
Qwen-14B-Chat (RLHF) æä¸å¤©ï¼å°æå»è¶
å¸ä¹°æ°´æã仿éäºä¸ç¯®åæ°é²çè¹æå é¦èï¼ç¶åèµ°åæ¶é¶å°ã 986 | 2309.16609#198 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 199 | æ¶é¶åå°ççå°å°ææä¸çæ°´æï¼çªç¶å¤§å£°åéï¼âåï¼ä½ çæ°´æå¥½æ°é²åï¼ä½ æ¯ä»åªéä¹°çï¼â
å°ææ£äºä¸ä¸ï¼å¿æ³ï¼âé¾éæä¹°æ°´æçæå·§è¢«åç°äºï¼âäº æ¯ä»å¾æå°åçï¼âæåå仿åéæçï¼ä¿è¯æ°é²ï¼â
å°çå¬äºï¼å¿ä¸ä½ç¬äºèµ·æ¥ï¼âä½ å¯çéï¼è¿æ¯è¶
å¸éçæ°´ æï¼ä¸æ¯ä½ å®¶æåéçãâ | 2309.16609#199 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 201 | å°æå¬äºï¼æ¾äºä¸å£æ°ï¼å¿æ
ä¹å¥½äºèµ·æ¥ãä»ä»äºé±ï¼æç æ°´æç¦»å¼äºè¶
å¸ï¼å¿éæ³çï¼â䏿¬¡åä¹°æ°´æï¼ä¸å®è¦æ´å å°å¿ï¼ä¸è¦é¹åºç¬è¯äºï¼â One day, Xiao Ming went to the supermarket to buy fruits. He picked a basket of fresh apples and bananas, and then headed towards the cashier.
The cashier, Xiao Wang, saw the fruits in Xiao Ming's hands and suddenly shouted, "Wow, your fruits are so fresh! Where did you buy them from?"
Xiao Ming was stunned for a moment and thought to himself, "Could it be that my fruit-buying skills have been discovered?" So he proudly replied, "I just picked them from the orchard, guaranteed fresh!" | 2309.16609#201 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 202 | Upon hearing this, Xiao Wang couldn't help but laugh and said, "You're really funny. These are fruits from the supermarket, not from your own orchard."
Xiao Ming, upon listening, was immediately dumbfounded. He awkwardly explained, "Oh, I'm sorry, I was too nervous, and I thought these were my fruits."
Xiao Wang smiled and comforted him, saying, "It's alright, it's alright. We both made a mistake. However, your fruits are indeed very good, and the fruits in our supermarket are also fresh."
Upon hearing this, Xiao Ming breathed a sigh of relief, and his mood improved. He paid the money, carried the fruits, and left the supermarket, thinking, "Next time I buy fruits, I must be more careful and avoid making such a funny mistake!"
GPT-3.5
In the supermarket's fruit section, Xiaoming saw a box of red apples and a box of green apples. He held the two boxes of apples and walked to the cash register to queue up for payment. | 2309.16609#202 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 204 | In the supermarket's fruit section, Xiaoming saw a box of red apples and a box of green apples. He held the two boxes of apples and walked to the cash register to queue up for payment. The cashier looked at the two boxes of apples in Xiaoming's hands and said, "Hey, little friend, do you know the difference between red apples and green apples?" Xiaoming thought momentarily and then smiled, "Of course, I know! Red apples are couples, and green apples are single dogs." Xiaoming's answer amused the cashier, who couldn't help but laugh. GPT-4 A very kind boy found an old lady picking apples
in a pile at the supermarket. Seeing that she could not find a satisfactory apple, he approached to help. | 2309.16609#204 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 206 | Confused, the old lady looked at the boy, then at the "apple" in her hand, and smiled: "Child, thank you very much for your kindness, but this is a tomato." A very kind boy found an old lady picking apples in a pile at the supermarket. Seeing that she could not find a satisfactory apple, he approached to help. The boy said, "Granny, I've checked all the apples in this box before, and there isn't a good one. But I know I saved an excellent apple on the counter for you. Let me get it for you." The old lady nodded gratefully, and the boy eagerly ran to get the apple. He joyfully handed a huge, vibrant apple to the old lady. Confused, the old lady looked at the boy, then at the "apple" in her hand, and smiled. She said, "Child, thank you for your kindness, but this is a tomato." | 2309.16609#206 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 208 | Model Qwen-7B-Chat (SFT) Response 设正方形的边长为x,那么每个长方形的边长就是x/5。根据题意,我们知道每个长方形的周长是60厘米,所以有: 2(x/5) + 2(x/5) + 2(x/5) + 2(x/5) = 60 化简得:x = 60/(2 + 2 + 2 + 2) = 15 所以正方形的周长是4x = 4 × 15 = 60厘米 If the side length of a square is x, then the side length of each rectangle is x/5. According to the question, we know that the perimeter of each rectangle is 60 centimeters, so we have: 2(x/5) + 2(x/5) + 2(x/5) + 2(x/5) = 60 Simplifying, we get: x = 60/(2 + 2 + 2 + | 2309.16609#208 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 211 | Qwen-14B-Chat (SFT) 设正方形边长为x,长方形边长为y 根据题意,有:x = 5y 2y + 2y = 60 4y = 60 y = 15 x = 5 × 15 = 75 正方形周长 = 4x = 4 × 75 = 300 Let the side length of the square be x and the length of the rectangle be y. According to the problem: x = 5y 2y + 2y = 60 4y = 60 y = 15 x = 5 × 15 = 75 Perimeter of the square = 4x = 4 × 75 = 300 Qwen-14B-Chat (RLHF) 首先,我们设正方形的边长为x,则每个长方形的长为x,宽为x/5。 | 2309.16609#211 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 212 | 为x/5。 根据长方形的周长公式,可得每个长方形的周长为2(x + x/5) = 12x/5。 已知每个长方形的周长为60厘米,代入上式得: 12x/5 = 60 解得x = 25。 因此,正方形的周长为4x = 100厘米。 答案:正方形的周长是100厘米。 First, let's assume that the square's side length is x. Therefore, the length of each rectangle is x and the width is x/5. According to the formula for the perimeter of a rectangle, we can determine that the perimeter of each rectangle is 2(x + x/5) = 12x/5. Given that the perimeter of each rectangle is 60 centimeters, we can substitute this value into the equation: 12x/5 = 60 Solving for x, | 2309.16609#212 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
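The arithmetic quoted in the surrounding model responses can be checked directly. Below is a minimal sketch under an assumed setup inferred from the responses (a square of side x cut into five congruent x-by-x/5 strips, each strip having a perimeter of 60 cm); the original problem statement is not part of this excerpt:

```python
from fractions import Fraction

# Assumed setup (inferred from the quoted responses, not stated in this
# excerpt): a square of side x is cut into five congruent rectangles of
# dimensions x by x/5, and each rectangle has a perimeter of 60 cm.
def strip_perimeter(x):
    return 2 * (x + x / Fraction(5))

# Solve 2(x + x/5) = 12x/5 = 60 exactly.
x = Fraction(60) * 5 / 12
assert strip_perimeter(x) == 60
print(x, 4 * x)  # side 25, square perimeter 100
```

Under this reading, the square's perimeter is 100 cm, matching the answer derived in the Qwen-14B-Chat (RLHF) response quoted above.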
2309.16609 | 213 | Given that the perimeter of each rectangle is 60 centimeters, we can substitute this value into the equation: 12x/5 = 60 Solving for x, we find x = 25. Therefore, the perimeter of the square is 4x = 100 centimeters. Answer: The perimeter of the square is 100 centimeters. GPT-3.5 设正方形的边长为x厘米,则每个长方形的长为x厘米,宽为5x/4厘米。 每个长方形的周长为2(x + 5x/4) = 60厘米。 化简得13x/2 = 60 解方程得到x = (60 × 2)/13 = 120/13 ≈ 9.23 正方形的周长为4x = 4 × 9.23 ≈ 36.92厘米。 If the side length of a square is x centimeters, then the length of each rectangle is x centimeters, and the width is 5x/4 centimeters. The perimeter of each rectangle is | 2309.16609#213 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 215 | GPT-4 由题意可知:每个长方形的一条长边等于正方形的边长,设为a。设每个长方形的一条短边为b,那么有5b = a。根据每个长方形的周长可得到2 ∗ (a + b) = 60厘米。将5b = a代入到长方形的周长公式中得:2 ∗ a + 2 ∗ 5b = 60,化简得2 ∗ a + 10b = 60,由于a = 5b,所以2 ∗ a = 2 ∗ 5b = 10b,那么有10b + 10b = 60,得b = 3厘米,a = 5 ∗ b = 15厘米。 | 2309.16609#215 | Qwen Technical Report |
2309.16609 | 217 | 48
From the given problem, it can be inferred that the length of one side of each rectangle is equal to that of one side of the square, denoted as a. Let the length of one side of each rectangle be denoted as b; then, we have 5b = a. The perimeter of each rectangle can be obtained as 2 ∗ (a + b) = 60 centimeters. Substituting 5b = a into the formula for the perimeter of the rectangle, we have 2a + 2 ∗ 5b = 60. Simplifying it gives 2a + 10b = 60. Since a = 5b, it follows that 2a = 2 ∗ 5b = 10b. Therefore, 10b + 10b = 60, which gives b = 3 centimeters and a = 5b = 15 centimeters. Thus, the perimeter of the square is 4a = 4 ∗ 15 = 60 centimeters. | 2309.16609#217 | Qwen Technical Report |
2309.16609 | 218 | Mathematics 同学们进行广播操比赛,全班正好排成相等的6行。小红排在第二行,从头数,她站在第5个位置,从后数她站在第3个位置,这个班共有多少人 The students are having a radio exercise competition, and the whole class is neatly arranged into 6 equal rows. Xiaohong is in the second row, counting from the front, she is in the 5th position. Counting from the back, she is in the 3rd position. How many people are there in this class? | 2309.16609#218 | Qwen Technical Report |
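The counting logic that the stronger models use in the responses below (the counts from the front and from the back both include Xiaohong, so one row has 5 + 3 − 1 = 7 people, and 6 equal rows give 42) can be sketched in a few lines of Python. This is an illustrative sketch; the function and variable names are my own:

```python
def class_size(pos_from_front: int, pos_from_back: int, rows: int) -> int:
    # Both position counts include Xiaohong herself, so subtract the overlap of 1.
    people_per_row = pos_from_front + pos_from_back - 1
    return people_per_row * rows

print(class_size(5, 3, 6))  # 7 people per row, 42 people in the class
```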
2309.16609 | 219 | Responses Model Qwen-7B-Chat (SFT) 小红前面的人数 = 5 - 1 = 4,小红后面的人数 = 3 - 1 = 2,全班人数 = 小红前面的人数 + 小红后面的人数 + 小红自己 = 4 + 2 + 1 = 7 The number of people in front of Xiaohong = 5 - 1 = 4, the number of people behind Xiaohong = 3 - 1 = 2, the total number of people in the class = the number of people in front of Xiaohong + the number of people behind Xiaohong + Xiaohong herself = 4 + 2 + 1 = 7. Response Qwen-14B-Chat (SFT) 5+3-1=7(人) 7×6=42(人) 这个班共有42人。 5+3-1=7 (people) 7×6=42 (people) There are a total of 42 people in this class. Qwen-14B-Chat (RLHF) 小红前 | 2309.16609#219 | Qwen Technical Report |
2309.16609 | 220 | (people) 7×6=42 (people) There are a total of 42 people in this class. Qwen-14B-Chat (RLHF) 小红前面有4个人,后面有2个人,所以这一行共有4+2+1=7个人。全班共有6行,所以全班共有6*7=42个人。 There are 4 people in front of Xiaohong and 2 people behind her, so there are a total of 4+2+1=7 people in this row. There are a total of 6 rows in the class, so there are a total of 6*7=42 people in the whole class. GPT-3.5 根据题目信息,小红排在第二行第5个位置,也即第11个位 | 2309.16609#220 | Qwen Technical Report |
2309.16609 | 221 | 置。而小红从后数是第3个位置,也即倒数第11个位置。所以这个班共有11个人。 According to the information given, Xiaohong is in the second row, fifth position, which is also the 11th position. And Xiaohong is in the third position from the end, which is also the 11th position counting from the back. So there are a total of 11 people in this class. GPT-4 如果小红从头数是第5个,从后数是第3个,那么这一排就有5+3-1=7人。全班正好排成6行,所以班级里总人数为7*6=<<7*6=42>>=42人。 Elo 976 976 1139 897 1010 | 2309.16609#221 | Qwen Technical Report |
2309.16609 | 222 | Code python实现一个正则表达式匹配,要求能匹配到所有以139开头,以00结尾的11位数字 Implement a regular expression matching in Python, which should be able to match all 11-digit numbers that start with "139" and end with "00". Responses | 2309.16609#222 | Qwen Technical Report |
2309.16609 | 223 | Model Qwen-7B-Chat (SFT) Response: 你可以使用Python的re模块来实现这个正则表达式匹配。以下是一个示例代码:

import re

def match_phone_numbers(phone_numbers):
    pattern = r"^139\d{9}$"
    return re.findall(pattern, phone_numbers)

phone_numbers = ["13900000000", "13912345678", "13998765432", "13987654321"]
print(match_phone_numbers(phone_numbers))

在这个代码中,我们首先定义了一个正则表达式模式"^139\d{9}$",这个模式表示以139开头,以00结尾,且长度为11的数字。然后我们使用re.findall | 2309.16609#223 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 224 | 数字。然后我们使用re.findall函数来查找所有匹配这个模式的字符串,最后返回这些匹配的字符串。

注意:这个代码假设phone_numbers是一个包含多个字符串的列表。如果你的phone_numbers是一个字符串,你需要先将其转换为列表。

You can use the re module in Python to implement this regular expression matching. Here is an example code:

import re

def match_phone_numbers(phone_numbers):
    pattern = r"^139\d{9}$"
    return re.findall(pattern, phone_numbers)

phone_numbers = ["13900000000", "13912345678", "13998765432", "13987654321"]
print(match_phone_numbers(phone_numbers))

Elo | 2309.16609#224 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 226 | This code defines a regular expression pattern "^139\d{9}$". This pattern represents a string that starts with "139", ends with "00", and has a length of 11 digits. Then, we use the re.findall function to find all the strings that match this pattern, and finally, we return these matching strings.
Note that this code assumes that phone_numbers is a list containing multiple strings. If your phone_numbers is a string, you must first convert it to a list.
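The note above (a list input versus a single string) can be illustrated with a short, self-contained sketch. This is my own example, not code from the report: the 11-digit pattern `^139\d{8}$` and the sample numbers are assumptions for illustration only.

```python
import re

# Sketch (assumed pattern): a list of numbers is filtered element by element
# with match, while a newline-joined string can be scanned in one findall
# call using re.M so that ^ and $ anchor at each line.
pattern = re.compile(r"^139\d{8}$")  # 139 followed by 8 digits: 11 digits total

numbers = ["13912345678", "13800138000", "13998765432"]
matched = [n for n in numbers if pattern.match(n)]  # per-element filtering

blob = "\n".join(numbers)
found = re.findall(r"^139\d{8}$", blob, flags=re.M)  # one scan over a string

print(matched)
print(found)
```

Both approaches return the same two "139…" numbers; which one is preferable depends on whether the input arrives as a list or as a single block of text.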
Qwen-14B-Chat (SFT) Response: 你可以使用Python的re模块来实现这个正则表达式匹配。以下是一个简单的例子: 941

import re

def match_phone_numbers(phone_numbers):
    pattern = r"^139\d{9}$"
    return [phone_number for phone_number in phone_numbers if re.match(pattern, phone_number)] | 2309.16609#226 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 228 | 在这个例子中,我们定义了一个正则表达式模式"^139\d{9}$",它匹配以139开头,后面跟着9个任意数字,最后以0结尾的字符串。然后我们使用re.match()函数来检查每个电话号码是否匹配这个模式。如果匹配,我们就将这个电话号码添加到结果列表中。

You can use the re module of Python to implement this regular expression matching. Here is a simple example:

import re

def match_phone_numbers(phone_numbers): | 2309.16609#228 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 229 | import re

def match_phone_numbers(phone_numbers):
    pattern = r"^139\d{9}$"
    return [phone_number for phone_number in phone_numbers if re.match(pattern, phone_number)]

phone_numbers = ["13912345678", "13923456789", "13934567890", "13945678901", "13956789012", "13967890123", "13978901234", "13989012345", "13990123456", "13900000000"]
print(match_phone_numbers(phone_numbers))
In this example, we define a regular expression pattern "^139\d{9}$", which matches a string starting with 139, followed by nine arbitrary digits and ending with 0. Then we use the re.match() function to check if each phone number matches this pattern. If it does, we add this phone number to the result list.
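The explanation above filters a list with re.match. As a hedged aside (my own sketch, not from the report): re.match only anchors at the beginning of the string, so without a trailing `$` (or `re.fullmatch`) over-long numbers slip through. The candidate numbers below are made up for illustration.

```python
import re

# Sketch: contrast start-anchored matching with whole-string matching.
candidates = ["13912345678", "139123456780", "13812345678"]

loose = [c for c in candidates if re.match(r"139\d{8}", c)]       # prefix only
strict = [c for c in candidates if re.fullmatch(r"139\d{8}", c)]  # whole string

print(loose)   # the 12-digit number also passes the unanchored check
print(strict)  # only the exact 11-digit number remains
```

This is why the patterns quoted in the responses above carry both `^` and `$` anchors.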
51 | 2309.16609#229 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 230 | 51
Qwen-14B-Chat (RLHF) Response: 可以使用Python的re模块来实现正则表达式匹配。以下是一个示例代码: 1090

import re

pattern = r"^139\d{9}$"

# 测试匹配
text = "13912345678"
match = re.match(pattern, text)
if match:
    print("匹配成功")
else:
    print("匹配失败")

# 测试不匹配
text = "139123456780"
match = re.match(pattern, text)
if match:
    print("匹配成功")
else:
    print("匹配失败") | 2309.16609#230 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 232 | 在测试匹配时,使用re.match()函数来尝试匹配文本,如果匹配成功,返回一个Match对象,否则返回None。在本例中,第一个测试匹配成功,第二个测试匹配失败。

You can use the re module in Python to implement regular expression matching. Here is an example code:

import re

pattern = r"^139\d{9}$"

# Match Test
text = "13912345678"
match = re.match(pattern, text)
if match:
    print("Success")
else:
    print("Fail")

# Unmatch Test
text = "139123456780"
match = re.match(pattern, text)
if match:
    print("Success")
else:
    print("Fail") | 2309.16609#232 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 233 | Among them, ^ represents matching the beginning of a string, \d represents matching a digit, {9} represents matching 9 digits, and $ represents matching the end of a string. Combining these symbols, we can obtain a regular expression that matches 11 digits. When testing for a match, use the re.match() function to attempt to match the text. A successful match returns a Match object; otherwise, it returns None. In this example, the first test matches successfully, while the second test fails to match.
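The re.match() semantics described above (a Match object on success, None on failure) can be exercised with a minimal sketch; the 11-digit pattern here is an illustrative assumption, not the exact pattern used in the model responses.

```python
import re

# Illustrative pattern (an assumption for this sketch): exactly 11 digits,
# anchored with ^ (start of string) and $ (end of string).
pattern = r"^\d{11}$"

# re.match returns a Match object on success and None on failure.
print(re.match(pattern, "13912345600") is not None)  # True  (11 digits)
print(re.match(pattern, "1391234560") is not None)   # False (10 digits)
```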
GPT-3.5 Using Python's re module makes regular-expression matching straightforward. Below is example code that matches all 11-digit numbers starting with 139 and ending with 00:
import re
text = "139123456700"
pattern = r"^139\d{8}00$"
match = re.match(pattern, text)
if match:
    print("Match succeeded")
else:
    print("Match failed")
失败â) | 2309.16609#233 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 234 | Output: Match succeeded. In the code, a text string text is first defined for testing the regular-expression match. Next, a regular expression pattern is defined, in which ^139 means starting with 139, \d{8} matches any eight digits, and 00$ means ending with 00. The re.match() function is then used to perform the match; if the match succeeds, "Match succeeded" is printed, otherwise "Match failed" is printed. The re module in Python can conveniently accomplish regular expression matching. Below is an example code that matches all 11-digit numbers | 2309.16609#234 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 236 | import re
text = "139123456700"
pattern = r"^139\d{8}00$"
match = re.match(pattern, text)
if match:
    print("Success")
else:
    print("Fail")
Output: Success In the code, a text string "text" is first defined for testing regular expression matching. Then, a regular expression "pattern" is defined, where "^139" means starting with 139, "\d{8}" means any eight digits, and "00$" means ending with 00. Then, the re.match() function is used for matching. If the match is successful, "Success" is outputted; otherwise, "Fail" is outputted. | 2309.16609#236 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
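A length check on the pattern walked through above: the literal 139 contributes 3 digits and the trailing 00 contributes 2, so \d{8} yields 13 digits in total, while \d{6} gives exactly 11. A short sketch of this arithmetic (an editorial illustration, not code from the paper):

```python
import re

# 11 digits total = "139" (3) + 6 free digits + "00" (2)
pattern = r"^139\d{6}00$"

print(re.match(pattern, "13912345600") is not None)  # True
print(re.match(pattern, "13912345601") is not None)  # False (ends in 01)
print(re.match(pattern, "23912345600") is not None)  # False (starts with 239)

# The \d{8} variant only succeeds on a 13-digit string:
print(re.match(r"^139\d{8}00$", "1391234567800") is not None)  # True
```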
2309.16609 | 237 | GPT-4 Below is the python code you need; this code uses Python's re module.
import re
def match_phone_number(string):
    pattern = r"^139\d{6}00$"
    return re.match(pattern, string) is not None
# Test
print(match_phone_number("13912345600"))   # Output: True
print(match_phone_number("15912345600"))   # Output: False
print(match_phone_number("13912345601"))   # Output: False
print(match_phone_number("139123456000"))  # Output: False
Here, "^139\d{6}00$" is the regular expression: "^" matches the beginning of the string, "139" matches the "139" in the string, and "\d{6}" matches any six digits | 2309.16609#237 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 238 | the "139" in the string, "\d{6}" matches any six digits, "00" matches the "00" in the string, and "$" matches the end of the string. Therefore, this regular expression matches all 11-digit numbers that start with 139 and end with 00. Below is the python code you need. This code uses the python re module.
import re
def match_phone_number(string):
    pattern = r"^139\d{6}00$"
    return re.match(pattern, string) is not None
# Test
print(match_phone_number("13912345600"))   # Output: True
print(match_phone_number("15912345600"))   # Output: False
print(match_phone_number("13912345601"))   # Output: False
print(match_phone_number("139123456000"))  # Output: False | 2309.16609#238 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
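As a usage note on the match_phone_number function discussed above: re.match only anchors at the start of the string, so the trailing $ is what rejects longer inputs. A small sketch (the function body mirrors the excerpt; the comparison without $ is an added illustration):

```python
import re

def match_phone_number(string):
    # 139 + any six digits + 00, anchored at both ends.
    return re.match(r"^139\d{6}00$", string) is not None

print(match_phone_number("13912345600"))   # True  (11 digits)
print(match_phone_number("139123456000"))  # False (12 digits, $ fails)

# Without the trailing $, the 12-digit string would match on its prefix:
print(re.match(r"^139\d{6}00", "139123456000") is not None)  # True
```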
2309.16609 | 240 | Code I use the following code to find the maximum depth of a binary tree, but the answer is wrong. Where did I go wrong?
def maxDepth(self, root: TreeNode) -> int:
    if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.left)
        queue = tmp
        res += 1
    return res
I use the following code to find the maximum depth of a binary tree, but the answer is incorrect. What could be the problem?
def maxDepth(self, root: TreeNode) -> int: | 2309.16609#240 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 241 | if not root: return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.left)
        queue = tmp
        res += 1
    return res
Responses Model Qwen-7B-Chat (SFT) Your code is correct, and the answer is correct. In your code, you first add the root node to the queue, then enter a loop that runs until the queue is | 2309.16609#241 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 242 | Your code is correct, the answer is correct. In your code, you first add the root node to the queue, then enter a loop until the queue is empty. In each loop, you add the left child node and right child node of all nodes in the queue to the queue. Then, you increase the depth counter res. Finally, you return the depth counter res. Response Qwen-14B-Chat (SFT) Your code will return the number of nodes in the binary tree, not the maximum depth. This is | 2309.16609#242 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 243 | because you increase the value of res in every loop, instead of only increasing it when a new deepest node is found. You can fix this problem by increasing the value of res only when a new deepest node is found. Here is the fixed code: def maxDepth(self, root: TreeNode) -> int: if not root: return 0 queue, res = [root], 0 while queue: tmp = [] for node in queue: if node.left: tmp.append(node.left) if node.right: tmp.append(node.right) queue = tmp if len(queue) > res: res = len(queue) return res Your code will return the number of nodes in the binary tree, not the maximum depth. This is because you increase the value of 'res' in every loop, instead of only increasing it when a new deepest node is found. You can fix this problem by increasing the value of | 2309.16609#243 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 246 | queue = tmp if len(queue) > res: res = len(queue) return res Qwen-14B-Chat (RLHF) The problem with this code is that it does not correctly handle the depth of the left and right subtrees. Specifically, each time a node is taken out of the queue, it adds both of that node's left and right subtrees to the queue, and then increases the queue length by one. However, if either the left or right subtree of the node is empty, then the depth of this node should be the length of the queue it is in minus one, rather than the queue length plus one. | 2309.16609#246 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
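As a side note on the SFT response quoted in these chunks: its proposed fix, `res = max(res, len(queue))`, does not compute depth either; it tracks the largest level size, i.e. the maximum width of the tree. A minimal runnable sketch of that flawed variant (the `TreeNode` class, the `sft_variant` name, and the sample tree are illustrative additions, not from the report):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def sft_variant(root):
    """The SFT response's 'fix': records the largest queue size seen,
    which is the maximum level width, not the maximum depth."""
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.right)
        queue = tmp                  # queue now holds the next level
        res = max(res, len(queue))   # tracks width, never counts levels
    return res

# Tree:  1        depth = 3, widest level = 2
#       / \
#      2   3
#     /
#    4
root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
print(sft_variant(root))  # -> 2, the width, not the depth 3
```

Note that the root's own level is never counted (the maximum is taken after the swap), so a single-node tree even yields 0 instead of 1, which is one way to see the fix is wrong.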
2309.16609 | 248 | In this code, each time a node is taken out, we only add the non-empty parts of its left and right subtrees to the queue, and calculate the length of the queue. Then, we compare the length of the queue with the current maximum depth and update the maximum depth. This way, we can correctly calculate the maximum depth of the binary tree. The problem with this code is that it does not correctly handle the depth of the left and right subtrees. Specifically, when a node is taken out from the queue, it adds both its left and right subtrees to the queue and increases the length of the queue by one. However, if either the left or right subtree of the node is empty, then the depth of this node | 2309.16609#248 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 250 | Here is the modified code: def maxDepth(self, root: TreeNode) -> int:
if not root: return 0 queue, res = [root], 0 while queue: tmp = [] for node in queue: if node.left: tmp.append(node.left) if node.right: tmp.append(node.right) queue = tmp res = max(res, len(queue)) return res
In this code, when each node is extracted, we only add the non- empty parts of its left and right subtrees to the queue, and calculate the length of the queue. Then, we compare the length of the queue with the current maximum depth and update the maximum depth. This way, we can correctly calculate the maximum depth of the binary tree.
| 2309.16609#250 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
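The corrected level-order solution that the quoted responses converge on (swap in the next level each iteration and count the levels) can be run end-to-end. The `TreeNode` definition and the sample tree below are illustrative additions, not taken from the report:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_depth(root):
    """Level-order (BFS) traversal: replace the queue with the next
    level on each iteration and count how many levels were processed."""
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.right)
        queue = tmp  # move to the next level
        res += 1     # one level fully processed
    return res

# Tree:  1        depth = 3
#       / \
#      2   3
#     /
#    4
root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
print(max_depth(root))  # -> 3
print(max_depth(None))  # -> 0
```

Because `res` is incremented exactly once per level, rather than per node or per queue length, it ends up equal to the number of levels, which is the maximum depth.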
2309.16609 | 251 | The problem lies in line 11 of the code. When adding node.right to the tmp list, it should be tmp.append(node.right) instead of tmp.append(node.left). The modified code is as follows: def maxDepth(self, root: TreeNode) -> int: if not root: return 0 queue, res = [root], 0 while queue: tmp = [] for node in queue: if node.left: tmp.append(node.left) if node.right: tmp.append(node.right) queue = tmp res += 1 return res | 2309.16609#251 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 252 | [] for node in queue: if node.left: tmp.append(node.left) if node.right: tmp.append(node.right) queue = tmp res += 1 return res There is an error in your code: in the line if node.right: tmp.append(node.left), you meant to add the right child node to the tmp list, but you wrote node.left, so only the left child node would be added to the tmp list. It should be node.right. The correct code is as follows: def maxDepth(self, root: TreeNode) -> int: if not root: return 0 queue, res = [root], 0 while queue: tmp = [] for node in queue: if node.left: tmp.append(node.left) if node.right: tmp.append(node.right) queue = tmp res += 1 return res | 2309.16609#252 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 253 | [] for node in queue: if node.left: tmp.append(node.left) if node.right: tmp.append(node.right) queue = tmp res += 1 return res The idea of this solution is: in each iteration, traverse all the nodes of the current level and add all the nodes of the next level to the queue for the next iteration. So after each iteration, the depth res increases by 1. Continue until the traversal queue is empty; the depth at that point is the maximum depth of the binary tree. | 2309.16609#253 | Qwen Technical Report |
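The level-order traversal described in this chunk can be sketched as a small, self-contained Python function. The `TreeNode` class below is a minimal stand-in assumed for illustration, mirroring the LeetCode-style signature used in the excerpt:

```python
# Level-order (BFS) maximum-depth computation, as described in the excerpt.
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def max_depth(root) -> int:
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []  # collect all nodes of the next level
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.right)
        queue = tmp
        res += 1  # one full level has been processed
    return res

# Example: a three-level tree (1 -> 2 -> 4, plus leaf 3)
root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
print(max_depth(root))  # 3
```

The depth counter increases once per while-loop iteration, which matches the "depth res increases by 1 per level" description in the text.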
2309.16609 | 254 | GPT-3.5
# GPT-4
There is an error in your code:
In this line of code if node.right: tmp.append(node.left), you should be adding the right child node to the tmp list, but you wrote node.left instead, so only the left child node will be added to the tmp list. The correct code should be node.right.
The correct code is as follows: def maxDepth(self, root: TreeNode) -> int: if not root: return 0 queue, res = [root], 0 while queue: tmp = [] for node in queue: if node.left: tmp.append(node.left) if node.right: tmp.append(node.right) queue = tmp res += 1 return res
The idea of this solution is: in each iteration, traverse all the nodes in the current level, and add all the nodes in the next level to the queue for the next iteration. So after each iteration, the depth res is increased by 1. Continue this process until the queue for traversal is empty, and the depth at that time is the maximum depth of the binary tree.
A.3 ANALYSIS OF CODE INTERPRETER
Here we provide a case of comparison between CODE LLAMA and QWEN-CHAT. This case demonstrates the advantages of QWEN-CHAT in processing tabular data and performing complex tasks. | 2309.16609#254 | Qwen Technical Report |
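The bug discussed in this chunk can be demonstrated directly. The sketch below is a hypothetical harness (not from the paper) that implements both the buggy and the corrected level update and shows that they disagree on a tree whose extra depth lies on the right side:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_depth(root, buggy=False):
    """BFS max depth; buggy=True reproduces the error discussed above."""
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                # The bug: appending node.left where node.right is intended.
                tmp.append(node.left if buggy else node.right)
        queue = tmp
        res += 1
    return res

# A tree whose third level hangs off the right child exposes the bug:
#       1
#      / \
#     2   3
#        /
#       5
root = TreeNode(1, TreeNode(2), TreeNode(3, left=TreeNode(5)))
print(max_depth(root))              # 3 (correct)
print(max_depth(root, buggy=True))  # 2 (node 5 is never reached)
```

With the bug, the right-child check enqueues node 2 a second time instead of node 3, so the subtree under node 3 is never visited and the reported depth is too small.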
Question: [Uploaded scatter_data.csv] Create a scatter plot with different size and color settings. Qwen: Great, you've uploaded a file named "scatter_data.csv". This appears to be a CSV file, which is commonly used for storing tabular data. We can use this data to create a scatter plot with different size and color settings. Thought: First, I need to use the code interpreter API to load the data and take a look at the first few rows to understand what we're working with. Action: code_interpreter Action Input: import pandas as pd # Load the data df = pd.read_csv("scatter_data.csv") # Display the first few rows of the dataframe df.head() Observation: execute_result: x y sizes colors 0 7.577257 1.952430 46.433866 24.870965 1 4.873620 2.574614 38.541454 21.586782 2 4.192995 5.256499 60.999636 23.404101 3 0.273015 3.678973 63.538505 50.963053 4 3.445224 2.462327 59.921846 27.207964 Thought: Now I understand the data format. I need to use the code_interpreter API to run the code. Action: code_interpreter | 2309.16609#255 | Qwen Technical Report |
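The first step of the transcript, loading the CSV and inspecting its head, can be reproduced without the original file. The snippet below rebuilds the four-column table in memory from the rows shown in the observation (the figure is OCR-rendered, so the exact digits are illustrative); the transcript does the same thing with pandas.read_csv and df.head(), while this sketch uses only the standard library:

```python
import csv
import io

# In-memory stand-in for scatter_data.csv; rows copied from the observation
# above (values are illustrative, taken from the rendered figure).
csv_text = """x,y,sizes,colors
7.577257,1.952430,46.433866,24.870965
4.873620,2.574614,38.541454,21.586782
4.192995,5.256499,60.999636,23.404101
"""

reader = csv.DictReader(io.StringIO(csv_text))
rows = [{k: float(v) for k, v in row.items()} for row in reader]
print(list(rows[0]))  # column names: ['x', 'y', 'sizes', 'colors']
print(len(rows))      # 3 rows parsed
```

The equivalent pandas call in the transcript, pd.read_csv("scatter_data.csv"), infers the same header and column types automatically.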
2309.16609 | 256 | 3 0.273015 3.678973 63.538505 5.963053 4 3.445224 2.462327 59.921846 27.207964 Thought: Now | understand the data format. | need to use the code_interpreter API to run the co de Action: code_interpreter [ user huston [Uploaded scatter_data.csv] Create a scatter plot with different size and color settings Thought: Firstly, | need to use the code interpreter API to load the data and taking a look at the first f ew rows to understand what we're working with. Action: code_interpreter Action Input: py © Copy code import pandas as pd # Load the data df = pd.read_csv("scatter_date.csv") # Display the first few rows of the dataframe df. head( ) Observation: The output is: © Copy code x y sizes colors 7.577257 1.952430 46.433060 24.878965 4.873626 2.574814 38.541454 21.506782 4.192995 5.250496 62.999630 23.484101 @.273015 3.678973 63.538505 50.963053 3.445224 2.462327 59.921040 | 2309.16609#256 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 257 | 23.484101 @.273015 3.678973 63.538505 50.963053 3.445224 2.462327 59.921040 27.207964 Thought: Now that we have the data loaded, we can start thinking about how to create the scatter plot. W e'll need to decide on the x-axis and y-axis labels, as well as the size and color settings for the poi nts. Action: code_interpreter Action Input: Action Input: py © Copy code py © Copy code import matplotlib.pyplot as plt import matplotlib.pyplot as plt import pandas as pd # Set the x-axis label plt.xlabel("x") df = pd.read_csv(âscatter_data.csyv") # Set the y-axis label plt.ylabel("y") fig, ax = plt.subplots() # Set the size of the points plt.scatter(df["x"], df["y"], s=df["sizes"], c=df["colorsâ]) ax.scatter(df['x'], df['y'], sadf[âsizes'], cadf{âcolorsâ]) # Show the plot plt.show() plt.show() Observation: | 2309.16609#257 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
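For reference, the scatter call that both transcripts converge on can be run headlessly. The data values below are placeholders mirroring the rows shown earlier (not the real scatter_data.csv), and the Agg backend is a choice made here so that no display is required:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render without a display
import matplotlib.pyplot as plt

# Placeholder data mirroring the columns of scatter_data.csv.
x = [7.577257, 4.873620, 4.192995, 0.273015, 3.445224]
y = [1.952430, 2.574614, 5.256499, 3.678973, 2.462327]
sizes = [46.43, 38.54, 61.00, 63.54, 59.92]   # marker areas, in points^2
colors = [24.87, 21.59, 23.40, 50.96, 27.21]  # scalars mapped via a colormap

fig, ax = plt.subplots()
sc = ax.scatter(x, y, s=sizes, c=colors)
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.savefig("scatter.png")  # plt.show() would be used interactively
print(sc.get_offsets().shape[0])  # number of plotted points: 5
```

Passing per-point `s` and `c` arrays is what produces the "different size and color settings" requested in the prompt; with scalar values every marker would look the same.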