(Fragment of Table 4: test pairwise accuracies of the PMP and reward models.)

We use the QWEN model to extract the reward for a sentence based on a specific end token. The learning rate for this process is set to a constant value of 3 × 10⁻⁶, and the batch size is 64. Additionally, the sequence length is set to 2048, and the training process lasts for a single epoch.
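For illustration, here is a minimal sketch of reading a scalar reward off the end token, assuming a Hugging-Face-style backbone that returns `last_hidden_state`; the class and value head are hypothetical, not Qwen's actual implementation.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Causal-LM backbone plus a scalar head; the reward is taken from the
    hidden state at the designated end token (a sketch, not Qwen's code)."""

    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone                 # pretrained transformer
        self.value_head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids, attention_mask=attention_mask)
        hidden = out.last_hidden_state           # (batch, seq_len, hidden)
        end_pos = attention_mask.sum(dim=1) - 1  # last real (end) token
        end_hidden = hidden[torch.arange(hidden.size(0)), end_pos]
        return self.value_head(end_hidden).squeeze(-1)  # one reward per sequence
```

With two such rewards per preference pair, the usual training objective is the pairwise loss -log σ(r_chosen - r_rejected); we note this as the standard choice rather than a detail confirmed in the text.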
We adopted the accuracy on the test dataset as an important but not exclusive evaluation metric for the reward model. In Table 4, we report the test pairwise accuracy of PMP and reward models on diverse human preference benchmark datasets (Bai et al., 2022b; Stiennon et al., 2020; Ethayarajh et al., 2022; Lightman et al., 2023). Specifically, QWEN Helpful-base and QWEN Helpful-online are our proprietary datasets. The responses in QWEN Helpful-base are generated from QWEN without RLHF, whereas QWEN Helpful-online includes responses from QWEN with RLHF. The results show that the PMP model demonstrates high generalization capabilities on out-of-distribution data, and the reward model demonstrates significant improvement on our QWEN reward datasets.

3.2.2 REINFORCEMENT LEARNING

Our Proximal Policy Optimization (PPO) process involves four models: the policy model, value model, reference model, and reward model. Before starting the PPO procedure, we pause the policy model's updates and focus solely on updating the value model for 50 steps. This approach ensures that the value model can adapt to different reward models effectively.
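As a rough illustration of this value-only warm-up, here is a schematic sketch; every callable is a placeholder supplied by the surrounding training pipeline, not Qwen code.

```python
from typing import Callable, List

def value_warmup(
    sample_responses: Callable[[List[str], int], list],  # rollouts from the frozen policy
    score: Callable[[list], list],                       # frozen reward model
    update_value: Callable[[list, list], None],          # one value-model step
    prompts: List[str],
    steps: int = 50,
) -> None:
    """Hold the policy fixed and update only the value model for 50 steps,
    so it adapts to the reward model before joint PPO optimization begins."""
    for _ in range(steps):
        rollouts = sample_responses(prompts, 2)  # two responses per query
        rewards = score(rollouts)
        update_value(rollouts, rewards)
```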
During the PPO operation, we use a strategy of sampling two responses for each query simultaneously. This strategy has proven to be more effective based on our internal benchmarking evaluations. We set the KL divergence coefficient to 0.04 and normalize the reward based on the running mean. The policy and value models have learning rates of 1 × 10⁻⁶ and 5 × 10⁻⁶, respectively. To enhance training stability, we utilize value loss clipping with a clip value of 0.15. For inference, the policy top-p is set to 0.9. Our findings indicate that although the entropy is slightly lower than when top-p is set to 1.0, the reward increases faster, ultimately resulting in consistently higher evaluation rewards under similar conditions.

Additionally, we have implemented a pretrained gradient to mitigate the alignment tax. Empirical findings indicate that, with this specific reward model, the KL penalty is adequately robust to counteract the alignment tax in benchmarks that are not strictly code or math in nature, such as those that test common-sense knowledge and reading comprehension. It is imperative to use a significantly larger volume of pretraining data than PPO data to ensure the effectiveness of the pretrained gradient. Additionally, our empirical study suggests that an overly large value for this coefficient can considerably impede alignment to the reward model, eventually compromising the final alignment, while an overly small value has only a marginal effect on alignment-tax reduction.

3.3 AUTOMATIC AND HUMAN EVALUATION OF ALIGNED MODELS

To showcase the effectiveness of our aligned models, we compare them with other aligned models on well-established benchmarks, including MMLU (Hendrycks et al., 2020), C-Eval (Huang et al., 2023), GSM8K (Cobbe et al., 2021), HumanEval (Chen et al., 2021), and BBH (Suzgun et al., 2022). Besides the widely used few-shot setting, we test our aligned models in the zero-shot setting to demonstrate how well the models follow instructions. The prompt in a zero-shot setting consists of an instruction and a question without any previous examples in the context.
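Collected in one place, the PPO hyperparameters above might look as follows in a training configuration; this is a sketch with illustrative field names, and only the values are taken from the text.

```python
from dataclasses import dataclass

@dataclass
class PPOConfig:
    # Values taken from the text; the field names are illustrative.
    responses_per_query: int = 2   # sample two responses per query
    kl_coef: float = 0.04          # KL divergence coefficient
    normalize_reward: bool = True  # normalize reward by its running mean
    policy_lr: float = 1e-6        # policy model learning rate
    value_lr: float = 5e-6         # value model learning rate
    value_clip: float = 0.15       # value loss clipping threshold
    rollout_top_p: float = 0.9     # nucleus sampling during rollouts
    # The pretrained-gradient coefficient is not disclosed in the text;
    # the paper only notes it should be neither too large nor too small,
    # with far more pretraining data than PPO data.
```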
The results of the baselines are collected from their official reports and OpenCompass (OpenCompass Team, 2023). The results in Table 5 demonstrate the effectiveness of our aligned models in understanding human instructions and generating appropriate responses.

Table 5: Performance of aligned models on widely-used benchmarks. We report both zero-shot and few-shot performance of the models.

| Model | Params | MMLU (0-shot / 5-shot) | C-Eval (0-shot / 5-shot) | GSM8K (0-shot / 8-shot) | HumanEval (0-shot) | BBH (0-shot / 3-shot) |
|---|---|---|---|---|---|---|
| GPT-3.5 | - | - / 69.1 | - / 52.5 | - / 78.2 | 73.2 | - / 70.1 |
| GPT-4 | - | - / 83.0 | - / 69.9 | - / 91.4 | 86.6 | - / 86.7 |
| ChatGLM2 | 6B | 45.5 / 46.0 | 50.1 / 52.6 | - / 28.8 | 11.0 | - / 32.7 |
| InternLM-Chat | 7B | - / 51.1 | - / 53.6 | - / 33.0 | 14.6 | - / 32.5 |
| Baichuan2-Chat | 7B | - / 52.9 | - / 55.6 | - / 32.8 | 13.4 | - / 35.8 |
| Baichuan2-Chat | 13B | - / 57.3 | - / 56.7 | - / 55.3 | 17.7 | - / 49.9 |
| LLAMA 2-CHAT | 7B | - / 46.2 | - / 31.9 | - / 26.3 | 12.2 | - / 35.6 |
| LLAMA 2-CHAT | 13B | - / 54.6 | - / 36.2 | - / 37.1 | 18.9 | - / 40.1 |
| LLAMA 2-CHAT | 70B | - / 63.8 | - / 44.3 | - / 59.3 | 32.3 | - / 60.8 |
| QWEN-CHAT | 1.8B | 42.4 / 43.9 | 50.7 / 50.3 | 27.8 / 19.5 | 14.6 | 27.1 / 25.0 |
| QWEN-CHAT | 7B | 55.8 / 57.0 | 59.7 / 59.3 | 50.3 / 54.1 | 37.2 | 39.6 / 46.7 |
| QWEN-CHAT | 14B | 64.6 / 66.5 | 69.8 / 71.7 | 60.1 / 59.3 | 43.9 | 46.9 / 58.7 |

(GPT-3.5 and GPT-4 are proprietary models; the remainder are open-source.)
QWEN-14B-Chat outperforms all other models except ChatGPT (OpenAI, 2022) and LLAMA 2-CHAT-70B (Touvron et al., 2023b) on all datasets, including MMLU (Hendrycks et al., 2020), C-Eval (Huang et al., 2023), GSM8K (Cobbe et al., 2021), HumanEval (Chen et al., 2021), and BBH (Suzgun et al., 2022). In particular, QWEN's performance on HumanEval, which measures the quality of generated code, is significantly higher than that of other open-source models. Moreover, QWEN's performance is consistently better than that of open-source models of similar size, such as LLaMA 2 (Touvron et al., 2023b), ChatGLM2 (ChatGLM2 Team, 2023), InternLM (InternLM Team, 2023), and Baichuan2 (Yang et al., 2023). This suggests that our alignment approach, which involves fine-tuning the model on a large dataset of human conversations, has been effective in improving the model's ability to understand and generate human-like language. Despite this, we have reservations about the ability of traditional benchmark evaluation to accurately measure the performance and potential of chat models trained with alignment techniques in today's landscape.
The results mentioned earlier provide some evidence of our competitive standing, but we believe that it is crucial to develop new evaluation methods specifically tailored to aligned models. We believe that human evaluation is crucial, which is why we have created a carefully curated dataset for this purpose. Our process involved collecting 300 instructions in Chinese that covered a wide range of topics, including knowledge, language understanding, creative writing, coding, and mathematics. To evaluate the performance of different models, we chose the SFT version of QWEN-CHAT-7B and the SFT and RLHF versions of QWEN-CHAT-14B, and added two strong baselines, GPT-3.5 and GPT-4 (see footnote 4), for comparison. For each instruction, we asked three annotators to rank the model responses by the overall score of helpfulness, informativeness, validity, and other relevant factors. Our dataset and evaluation methodology provide a comprehensive and rigorous assessment of the capabilities of different language models in various domains.

Figure 4 illustrates the win rates of the various models. For each model, we report the percentage of wins, ties, and losses against GPT-3.5, with the segments of each bar from bottom to top representing these statistics. The experimental results clearly demonstrate that the RLHF model outperforms the SFT models by significant margins, indicating that RLHF can encourage the model to generate responses that are more preferred by humans. In terms of overall performance, we find that the RLHF model significantly outperforms the SFT models while still falling behind GPT-4. This indicates the effectiveness of RLHF for aligning to human preference. To provide a more comprehensive understanding of the models' performance, we include a case study with examples from different models in Appendix A.2.2. Nonetheless, it remains difficult to accurately capture the gap between our models and the proprietary models.
Footnote 4: To obtain the results from the models, we use the OpenAI APIs of GPT-3.5-turbo-0613 and GPT-4-0613.

[Figure 4: bar chart of win rates (vs. GPT-3.5) for Qwen-7B-Chat (SFT), Qwen-14B-Chat (SFT), and Qwen-14B-Chat (RLHF); caption below.]
Figure 4: Results of the human evaluation for chat models. We compare Qwen-7B (SFT), Qwen-14B (SFT), Qwen-14B (RLHF), as well as GPT-4 against GPT-3.5. Each bar segment represents the percentage of wins, ties, and losses, from bottom to top. On average, the RLHF model outperforms the SFT model. The dataset consists of 300 Chinese instructions. Panels: Average, Knowledge, Language Understanding, Creative Writing, Math, Coding.
As such, a more extensive and rigorous assessment is required for the chat models.

3.4 TOOL USE, CODE INTERPRETER, AND AGENT

Table 6: Performance of QWEN on the in-house Chinese benchmark that evaluates its ability to use unseen tools via ReAct prompting.

| Model | Params | Tool Selection (Acc.↑) | Tool Input (Rouge-L↑) | False Positive (%↓) |
|---|---|---|---|---|
| GPT-4 | - | 95 | 90 | 15.0 |
| GPT-3.5 | - | 85 | 88 | 75.0 |
| QWEN-CHAT | 1.8B | 92 | 89 | 19.3 |
| QWEN-CHAT | 7B | 98 | 91 | 7.3 |
| QWEN-CHAT | 14B | 98 | 93 | 2.4 |

The QWEN models, which are designed to be versatile, have the remarkable ability to assist with (semi-)automating daily tasks by leveraging their skills in tool use and planning.
As such, they can serve as agents or copilots to help streamline various tasks. We explore QWEN's proficiency in the following areas:

• Utilizing unseen tools through ReAct prompting (Yao et al., 2022) (see Table 6).
• Using a Python code interpreter to enhance math reasoning, data analysis, and more (see Table 7 and Table 8).
• Functioning as an agent that accesses Hugging Face's extensive collection of multimodal models while engaging with humans (see Table 9).
Table 7: The proportion of code generated by QWEN that is executable on the in-house evaluation benchmark for Code Interpreter. This benchmark examines QWEN's coding proficiency in math problem solving, data visualization, and general purposes. CODE LLAMA underperforms on visualization tasks because it hallucinates non-existent columns solely based on CSV file names (see Figure 5).

| Model | Params | Math (%) | Visualization (%) | General (%) | All (%) |
|---|---|---|---|---|---|
| GPT-4 | - | 91.9 | 85.9 | 82.8 | 86.8 |
| GPT-3.5 | - | 89.2 | 65.0 | 74.1 | 72.9 |
| LLAMA 2-CHAT | 7B | 41.9 | 33.1 | 24.1 | 33.6 |
| LLAMA 2-CHAT | 13B | 50.0 | 40.5 | 48.3 | 44.4 |
| CODE LLAMA-INSTRUCT | 7B | 85.1 | 54.0 | 70.7 | 65.1 |
| CODE LLAMA-INSTRUCT | 13B | 93.2 | 55.8 | 74.1 | 68.8 |
| InternLM-Chat v1.1 | 7B | 78.4 | 44.2 | 62.1 | 56.3 |
| InternLM-Chat | 20B | 70.3 | 44.2 | 65.5 | 54.9 |
| QWEN-CHAT | 1.8B | 33.8 | 30.1 | 8.6 | 26.8 |
| QWEN-CHAT | 7B | 82.4 | 64.4 | 67.2 | 70.2 |
| QWEN-CHAT | 14B | 89.2 | 84.1 | 65.5 | 81.7 |

Table 8: Correctness of the final response on the in-house evaluation benchmark for Code Interpreter.
Visualization-Hard tasks involve planning multiple steps, while Visualization-Easy tasks do not. Visualization-All measures both types of tasks. CODE LLAMA excels in performing Visualization-Easy tasks but tends to underperform in Visualization-Hard tasks, due to its inclination to hallucinate non-existent columns based on the name of a CSV file (see Figure 5).

| Model | Params | Math (%) | Visualization-Hard (%) | Visualization-Easy (%) | Visualization-All (%) |
|---|---|---|---|---|---|
| GPT-4 | - | 82.8 | 66.7 | 60.8 | 63.8 |
| GPT-3.5 | - | 47.3 | 33.3 | 55.7 | 44.2 |
| LLAMA 2-CHAT | 7B | 3.9 | 14.3 | 39.2 | 26.4 |
| LLAMA 2-CHAT | 13B | 8.3 | 8.3 | 40.5 | 23.9 |
| CODE LLAMA-INSTRUCT | 7B | 14.3 | 26.2 | 60.8 | 42.9 |
| CODE LLAMA-INSTRUCT | 13B | 28.2 | 27.4 | 62.0 | 44.2 |
| InternLM-Chat v1.1 | 7B | 28.5 | 4.8 | 40.5 | 22.1 |
| InternLM-Chat | 20B | 34.6 | 21.4 | 45.6 | 33.1 |
| QWEN-CHAT | 1.8B | 14.7 | 3.6 | 20.3 | 11.7 |
| QWEN-CHAT | 7B | 41.9 | 40.5 | 54.4 | 47.2 |
| QWEN-CHAT | 14B | 58.4 | 53.6 | 59.5 | 56.4 |
Table 9: Results of QWEN-Chat on the Hugging Face Agent benchmark.

| Task | Model | Params | Tool Selection↑ | Tool Used↑ | Code↑ |
|---|---|---|---|---|---|
| Run Mode | GPT-4 | - | 100 | 100 | 97.4 |
| Run Mode | GPT-3.5 | - | 95.4 | 96.3 | 87.0 |
| Run Mode | Starcoder-Base | 15B | 86.1 | 87.0 | 68.9 |
| Run Mode | Starcoder | 15B | 87.0 | 88.0 | 68.9 |
| Run Mode | QWEN-CHAT | 1.8B | 85.2 | 84.3 | 61.1 |
| Run Mode | QWEN-CHAT | 7B | 87.0 | 87.0 | 71.5 |
| Run Mode | QWEN-CHAT | 14B | 93.5 | 94.4 | 87.0 |
| Chat Mode | GPT-4 | - | 97.9 | 97.9 | 98.5 |
| Chat Mode | GPT-3.5 | - | 97.3 | 96.8 | 89.6 |
| Chat Mode | Starcoder-Base | 15B | 97.9 | 97.9 | 91.1 |
| Chat Mode | Starcoder | 15B | 97.9 | 97.9 | 89.6 |
| Chat Mode | QWEN-CHAT | 1.8B | 93.6 | 93.6 | 73.2 |
| Chat Mode | QWEN-CHAT | 7B | 94.7 | 94.7 | 85.1 |
| Chat Mode | QWEN-CHAT | 14B | 97.9 | 97.9 | 95.5 |
To enhance QWEN's capabilities as an agent or copilot, we employ the self-instruct (Wang et al., 2023c) strategy for SFT. Specifically, we utilize the in-context learning capability of QWEN for self-instruction. By providing a few examples, we can prompt QWEN to generate more relevant queries and generate outputs that follow a specific format, such as ReAct (Yao et al., 2022). We then apply rules and involve human annotators to filter out any noisy samples. Afterwards, the samples are incorporated into QWEN's training data, resulting in an updated version of QWEN that is more dependable for self-instruction. We iterate through this process multiple times until we gather an ample number of samples that possess both exceptional quality and a wide range of diversity. As a result, our final collection consists of around 2,000 high-quality samples. During the finetuning process, we mix these high-quality samples with all the other general-purpose SFT samples, rather than introducing an additional training stage. By doing so, we are able to retain essential general-purpose capabilities that are also pertinent for constructing agent applications.
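A high-level sketch of this self-instruct loop; every callable is a placeholder supplied by the surrounding pipeline, not an actual Qwen API.

```python
from typing import Callable, List

def self_instruct_loop(
    generate: Callable[[str], List[str]],          # model generation on a prompt
    rule_filter: Callable[[List[str]], List[str]], # rule-based cleanup
    human_filter: Callable[[List[str]], List[str]],# annotator review
    finetune: Callable[[List[str]], None],         # update the model on the pool
    seed: List[str],                               # hand-written ReAct demos
    num_rounds: int = 3,
) -> List[str]:
    """Iteratively grow a pool of agent/tool-use SFT samples, as described
    in the text: generate from few-shot prompts, filter, retrain, repeat."""
    samples = list(seed)
    for _ in range(num_rounds):
        prompt = "\n\n".join(samples[-4:])      # few-shot context from the pool
        candidates = generate(prompt)           # new queries + ReAct outputs
        candidates = rule_filter(candidates)    # drop malformed samples
        candidates = human_filter(candidates)   # annotators remove noise
        samples.extend(candidates)
        finetune(samples)                       # more dependable next round
    return samples  # the paper reports ~2,000 high-quality samples
```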
Using Tools via ReAct Prompting. We have created and made publicly available a benchmark for evaluating QWEN's ability to call plugins, tools, functions, or APIs using ReAct prompting (see Qwen Team, Alibaba Group, 2023b). To ensure fair evaluation, we have excluded any plugins that were included in QWEN's training set from the evaluation set. The benchmark assesses the model's accuracy in selecting the correct plugin from a pool of up to five candidates, as well as the plausibility of the parameters passed into the plugin and the frequency of false positives. In this evaluation, a false positive occurs when the model incorrectly invokes a plugin in response to a query, despite not being required to do so. The results presented in Table 6 demonstrate that QWEN consistently achieves higher accuracy in identifying the relevance of a query to the available tools as the model size increases. However, the table also highlights that beyond a certain point, there is little improvement in performance when it comes to selecting the appropriate tool and providing relevant arguments. This suggests that the current preliminary benchmark may be relatively easy and may require further enhancement in future iterations. It is worth noting that GPT-3.5 stands out as an exception, displaying suboptimal performance on this particular benchmark. This could potentially be attributed to the fact that the benchmark primarily focuses on the Chinese language, which may not align well with GPT-3.5's capabilities.
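For concreteness, a ReAct-style interaction in this setting might look like the following trace; the tool name, query, and exact keywords are illustrative, not taken from the benchmark.

```
Question: What is the weather like in Hangzhou today?
Thought: I should call a weather tool to get current conditions.
Action: weather_api
Action Input: {"city": "Hangzhou"}
Observation: Sunny, 26°C, light breeze.
Thought: I now know the final answer.
Final Answer: It is sunny in Hangzhou today, around 26°C with a light breeze.
```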
Additionally, we observe that GPT-3.5 tends to attempt to use at least one tool, even if the query cannot be effectively addressed by the provided tools.

Using Code Interpreter for Math Reasoning and Data Analysis. The Python code interpreter is widely regarded as a powerful tool for augmenting the capabilities of an LLM agent. It is worth investigating whether QWEN can harness the full potential of this interpreter to enhance its performance in diverse domains, such as mathematical reasoning and data analysis. To facilitate this exploration, we have developed and made publicly available a benchmark that is specifically tailored for this purpose (see Qwen Team, Alibaba Group, 2023a).

The benchmark encompasses three primary categories of tasks: math problem solving, data visualization, and other general-purpose tasks like file post-processing and web crawling. Within the visualization tasks, we differentiate between two levels of difficulty. The easier level can be achieved by simply writing and executing a single code snippet without the need for advanced planning skills. The more challenging level requires strategic planning and executing multiple code snippets in a sequential manner, because the subsequent code must be written based on the output of the previous code. For example, an agent may need to examine the structure of a CSV file using one code snippet before proceeding to write and execute additional code to create a plot.

Regarding evaluation metrics, we consider both the executability and correctness of the generated code. To elaborate on the correctness metrics: for math problems, we measure accuracy by verifying whether the ground-truth numerical answer is present in both the code execution result and the final response. For data visualization, we assess accuracy by utilizing QWEN-VL (Bai et al., 2023), a powerful multimodal language model. QWEN-VL is capable of answering text questions paired with images, and we rely on it to confirm whether the image generated by the code fulfills the user's request.
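A minimal sketch of these two correctness checks, assuming the interpreter's stdout and the model's final response are available as strings; the multimodal-judge call is passed in as a placeholder.

```python
from typing import Callable

def math_is_correct(ground_truth: str, exec_output: str, final_response: str) -> bool:
    """A math answer counts as correct only if the ground-truth number
    appears in BOTH the code execution result and the final response."""
    return ground_truth in exec_output and ground_truth in final_response

def visualization_is_correct(
    image_path: str,
    user_request: str,
    ask_vl_model: Callable[[str, str], str],  # e.g., a QWEN-VL query function
) -> bool:
    """Visualization is judged by a multimodal model: ask whether the
    generated image fulfills the user's request."""
    question = f"Does this image fulfill the request: {user_request}? Answer yes or no."
    return ask_vl_model(image_path, question).strip().lower().startswith("yes")
```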
The results regarding executability and correctness are presented in Table 7 and Table 8, respectively. It is evident that CODE LLAMA generally outperforms LLAMA 2, its generalist counterpart, which is not surprising since this benchmark specifically requires coding skills. However, it is worth noting that specialist models optimized for code synthesis do not necessarily outperform generalist models, because this benchmark encompasses various skills beyond coding, such as abstracting math problems into equations, understanding language-specified constraints, and responding in the specified format such as ReAct. Notably, QWEN-7B-CHAT and QWEN-14B-CHAT significantly surpass all other open-source alternatives of similar scale, despite being generalist models.
Serving as a Hugging Face Agent. Hugging Face provides a framework called the Hugging Face Agent or Transformers Agent (Hugging Face, 2023), which empowers LLM agents with a curated set of multimodal tools, including speech recognition and image synthesis. This framework allows an LLM agent to interact with humans, interpret natural language commands, and employ the provided tools as needed. To evaluate QWEN's effectiveness as a Hugging Face agent, we utilized the evaluation benchmarks offered by Hugging Face.
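As a concrete illustration, the Transformers Agent interface at the time looked roughly like the snippet below; treat the exact class name, endpoint, and signatures as approximate, since this API has since evolved.

```python
from transformers import HfAgent

# Controller LLM served behind an inference endpoint (URL illustrative).
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# The agent parses the natural-language command, picks tools from the
# curated set (e.g., text-to-image), writes the glue code, and executes it.
picture = agent.run("Draw me a picture of rivers and lakes.")
```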
The results are presented in Table 9. The evaluation results reveal that QWEN performs quite well in comparison to other open-source alternatives, only slightly behind the proprietary GPT-4, demonstrating QWEN's competitive capabilities.

4 CODE-QWEN: SPECIALIZED MODEL FOR CODING

Training on domain-specific data has been shown to be highly effective, particularly in the case of code pretraining and finetuning. A language model reinforced with training on code data can serve as a valuable tool for coding, debugging, and interpretation, among other tasks. In this work, we have developed a series of generalist models using pretraining and alignment techniques. Building on this foundation, we have created domain-specific models for coding by leveraging the base language models of QWEN: a continued-pretrained model, CODE-QWEN, and a supervised-finetuned model, CODE-QWEN-CHAT. Both come in 14-billion and 7-billion parameter versions.

4.1 CODE PRETRAINING

We believe that relying solely on code data for pretraining can result in a significant loss of the ability to function as a versatile assistant. Unlike previous approaches that focused solely on pretraining on code data (Li et al., 2022; 2023d), we take a different approach (Rozière et al., 2023) by starting with our base models QWEN, trained on a combination of text and code data, and then continuing to pretrain on the code data.
We continue to pretrain the models on a total of around 90 billion tokens. During the pretraining phase, we initialize the model using the base language models QWEN. Many applications that rely on specialized models for coding encounter lengthy contextual scenarios, such as tool usage and code interpretation, as mentioned in Section 3.4. To address this, we train our models with context lengths of up to 8192. Similar to base model training in Section 2.4, we employ Flash Attention (Dao et al., 2022) in the attention modules and adopt the standard optimizer AdamW (Kingma & Ba, 2014; Loshchilov & Hutter, 2017), setting β1 = 0.9, β2 = 0.95, and ε = 10⁻⁸.
We set the learning rate to 6.0 × 10⁻⁵ for CODE-QWEN-14B and 3.0 × 10⁻⁵ for CODE-QWEN-7B, with 3% warm-up iterations and no learning rate decay.

4.2 CODE SUPERVISED FINE-TUNING

After conducting a series of empirical experiments, we determined that the multi-stage SFT strategy yields the best performance compared to other methods. In the supervised fine-tuning stage, the model CODE-QWEN-CHAT, initialized from the code foundation model CODE-QWEN, is optimized by the AdamW (Kingma & Ba, 2014; Loshchilov & Hutter, 2017) optimizer (β1 = 0.9, β2 = 0.95, ε = 10⁻⁸) with learning rates of 2.0 × 10⁻⁶ and 1.0 × 10⁻⁵ for the 14B and 7B models, respectively.
The learning rate increases to the peak value following a cosine schedule (with 3% warm-up steps) and then remains constant.

4.3 EVALUATION

Our CODE-QWEN models have been compared with both proprietary and open-source language models, as shown in Tables 10 and 11. These tables present the results of our evaluation on the test sets of HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and the multilingual code generation benchmark HUMANEVALPACK (Muennighoff et al., 2023). The comparison is based on the pass@1 performance of the models on these benchmark datasets.

Our analysis reveals that specialized models, specifically CODE-QWEN and CODE-QWEN-CHAT, significantly outperform previous baselines with similar parameter counts, such as OCTOGEEX (Muennighoff et al., 2023), InstructCodeT5+ (Wang et al., 2023d), and CodeGeeX2 (Zheng et al., 2023). In fact, these models even rival the performance of larger models like Starcoder (Li et al., 2023d). When compared to some of the extremely large-scale closed-source models, CODE-QWEN and CODE-QWEN-CHAT demonstrate clear advantages in terms of pass@1. However, it is important to note that these models still fall behind the state-of-the-art methods, such as GPT-4, in general. Nonetheless, with the continued scaling of both model size and data size, we believe that this gap can be narrowed in the near future. It is crucial to emphasize that the evaluations mentioned previously are insufficient for grasping the full extent of the strengths and weaknesses of the models. In our opinion, it is necessary to develop more rigorous tests to enable us to accurately assess our relative performance in comparison to GPT-4.
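A hedged PyTorch sketch of the optimizer and schedule described in Sections 4.1 and 4.2 (AdamW with β1 = 0.9, β2 = 0.95, ε = 1e-8; the learning rate ramps up to its peak over the first 3% of steps and is then held constant); the ramp shape and helper structure are our reading of the text, not Qwen's code.

```python
import math
import torch

def make_optimizer_and_scheduler(model, peak_lr, total_steps, warmup_frac=0.03):
    """AdamW with the paper's betas/eps; LR rises to peak_lr during the
    3% warm-up phase and is then held constant (no decay)."""
    optimizer = torch.optim.AdamW(
        model.parameters(), lr=peak_lr, betas=(0.9, 0.95), eps=1e-8)
    warmup_steps = max(1, int(warmup_frac * total_steps))

    def lr_lambda(step):
        if step < warmup_steps:
            # cosine-shaped ramp from 0 up to the peak at the end of warm-up
            return 0.5 * (1.0 - math.cos(math.pi * step / warmup_steps))
        return 1.0  # constant afterwards

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler

# Example peak rates from the text: 6.0e-5 (CODE-QWEN-14B pretraining),
# 3.0e-5 (CODE-QWEN-7B pretraining), 2.0e-6 / 1.0e-5 (14B / 7B SFT).
```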
5 MATH-QWEN: SPECIALIZED MODEL FOR MATHEMATICS REASONING

We have created a mathematics-specialized model series called MATH-QWEN-CHAT, which is built on top of the QWEN pretrained language models. Specifically, we have developed assistant models that are specifically designed to excel in arithmetic and mathematics and are aligned with human behavior. We are releasing two versions of this model series, MATH-QWEN-14B-CHAT and MATH-QWEN-7B-CHAT, which have 14 billion and 7 billion parameters, respectively.
5.1 TRAINING

We carry out math SFT on our augmented math instructional dataset for mathematics reasoning, and therefore we obtain the chat model, MATH-QWEN-CHAT, directly. Owing to the shorter average length of the math SFT data, we use a sequence length of 1024 for faster training. Most user inputs in the math SFT dataset are examination questions, so it is easy for the model to predict the input format, and it is meaningless for the model to predict the input condition and numbers, which could be random.

Table 10: Results of pass@1 (%) on HumanEval and MBPP. Most scores are retrieved from the papers of StarCoder (Li et al., 2023d), CodeT5+ (Wang et al., 2023d), WizardCoder (Luo et al., 2023b), and CODE LLAMA (Rozière et al., 2023).
| Model | Params | HumanEval | MBPP |
|---|---|---|---|
| Proprietary models | | | |
| PaLM | 540B | 26.2 | 36.8 |
| PaLM-Coder | 540B | 36.0 | 47.0 |
| PaLM 2-S | - | 37.6 | 50.0 |
| Code-Cushman-001 | - | 33.5 | 45.9 |
| Code-Davinci-002 | - | 47.0 | 58.1 |
| GPT-3.5 | - | 73.2 | - |
| GPT-4 | - | 86.6 | - |
| Open-source models | | | |
| LLAMA 2 | 7B | 12.2 | 20.8 |
| LLAMA 2 | 13B | 20.1 | 27.6 |
| LLAMA 2 | 34B | 22.6 | 33.8 |
| LLAMA 2 | 70B | 30.5 | 45.4 |
| CodeGen-Multi | 16B | 18.3 | 20.9 |
| CodeGen-Mono | 16B | 29.3 | 35.3 |
| CodeGeeX2 | 6B | 35.9 | - |
| StarCoder-Prompted | 15B | 40.8 | 49.5 |
| CodeT5+ | 16B | 30.9 | - |
| InstructCodeT5+ | 16B | 35.0 | - |
| CODE LLAMA | 7B | 33.5 | 41.4 |
| CODE LLAMA | 13B | 36.0 | 47.0 |
| CODE LLAMA | 34B | 48.8 | 55.0 |
| CODE LLAMA-INSTRUCT | 7B | 34.8 | 44.4 |
| CODE LLAMA-INSTRUCT | 13B | 42.7 | 49.4 |
| CODE LLAMA-INSTRUCT | 34B | 41.5 | 57.0 |
| CODE LLAMA-PYTHON | 7B | 38.4 | 47.6 |
| CODE LLAMA-PYTHON | 13B | 43.3 | 49.0 |
| CODE LLAMA-PYTHON | 34B | 53.7 | 56.2 |
| UNNATURAL CODE LLAMA | 34B | 62.2 | 61.2 |
| WizardCoder-Python | 13B | 64.0 | 55.6 |
| WizardCoder-Python | 34B | 73.2 | 61.2 |
| QWEN-CHAT | 7B | 37.2 | 35.8 |
| QWEN-CHAT | 14B | 43.9 | 46.4 |
| CODE-QWEN | 7B | 40.2 | 41.8 |
| CODE-QWEN | 14B | 45.1 | 51.4 |
| CODE-QWEN-CHAT | 7B / 14B | | |
Table 11: Zero-shot pass@1 (%) performance on the HUMANEVALPACK (synthesize) benchmark. The baseline results are partly from OCTOPACK (Muennighoff et al., 2023).

| Model | Params | Python | JavaScript | Java | Go | C++ | Rust | Avg. |
|---|---|---|---|---|---|---|---|---|
| GPT-4 | - | 86.6 | 82.9 | 81.7 | 72.6 | 78.7 | 67.1 | 78.3 |
| InstructCodeT5+ | 16B | 37.0 | 18.9 | 17.4 | 9.5 | 19.8 | 0.3 | 17.1 |
| StarChat-β | 15B | 33.5 | 31.4 | 26.7 | 25.5 | 26.6 | 14.0 | 26.3 |
| StarCoder | 15B | 33.6 | 30.8 | 30.2 | 17.6 | 31.6 | 21.8 | 27.6 |
| CodeGeeX2 | 6B | 35.9 | 32.2 | 30.8 | 22.5 | 29.3 | 18.1 | 28.1 |
| OCTOGEEX | 6B | 44.7 | 33.8 | 36.9 | 21.9 | 32.3 | 15.7 | 30.9 |
| OCTOCODER | 15B | 46.2 | 39.2 | 38.2 | 30.4 | 35.6 | 23.4 | 35.5 |
| WizardCoder | 15B | 59.8 | 49.5 | 36.1 | 36.4 | 40.9 | 20.2 | 40.5 |
| QWEN-CHAT | 7B | 37.2 | 23.2 | 32.9 | 20.7 | 22.0 | 9.1 | 24.2 |
| QWEN-CHAT | 14B | 43.9 | 38.4 | 42.7 | 34.1 | 24.4 | 18.9 | 33.7 |
| CODE-QWEN | 7B | 40.2 | 40.4 | 40.2 | 26.2 | 20.7 | 15.8 | 30.6 |
| CODE-QWEN | 14B | 45.1 | 51.8 | 57.3 | 39.6 | 18.2 | 20.7 | 38.8 |
| CODE-QWEN-CHAT | 7B | 43.3 | 41.5 | 49.4 | 29.3 | 32.9 | 20.1 | 36.1 |
| CODE-QWEN-CHAT | 14B | 66.4 | 58.5 | 56.1 | 47.6 | 54.2 | 28.7 | 51.9 |
Table 12: Results of models on mathematical reasoning. We report the accuracy of QWEN on all benchmarks using greedy decoding. For MATH, we report QWEN's performance on the test set from Lightman et al. (2023).

| Model | Params | GSM8K | MATH | Math401 | Math23K |
|---|---|---|---|---|---|
| GPT-4 | - | 92.0 | 42.5 | 83.5 | 74.0 |
| GPT-3.5 | - | 80.8 | 34.1 | 75.1 | 60.0 |
| Minerva | 8B | 16.2 | 14.1 | - | - |
| Minerva | 62B | 52.4 | 27.6 | - | - |
| Minerva | 540B | 58.8 | 33.6 | - | - |
| LLaMA-1 RFT | 7B | 46.5 | 5.2 | - | - |
| LLaMA-1 RFT | 13B | 52.1 | 5.1 | - | - |
| WizardMath | 7B | 54.9 | 10.7 | - | - |
| WizardMath | 13B | 63.9 | 14.0 | - | - |
| WizardMath | 70B | 81.6 | 22.7 | - | - |
| GAIRMath-Abel | 7B | 59.7 | 13.0 | - | - |
| GAIRMath-Abel | 13B | 66.4 | 17.3 | - | - |
| GAIRMath-Abel | 70B | 83.6 | 28.3 | - | - |
| QWEN-CHAT | 7B | 50.3 | 6.8 | 57.4 | 51.2 |
| QWEN-CHAT | 14B | 60.1 | 18.4 | 70.1 | 67.0 |
| MATH-QWEN-CHAT | 7B | 62.5 | 17.2 | 80.8 | 75.4 |
| MATH-QWEN-CHAT | 14B | 69.8 | 24.2 | 85.0 | 78.4 |
Thus, we mask the inputs of the system and the user to avoid loss computation on them, and we find that masking them accelerates convergence in our preliminary experiments. For optimization, we use the AdamW optimizer with the same hyperparameters as in SFT, except that we use a peak learning rate of 2 × 10⁻⁵ and 50,000 training steps.

5.2 EVALUATION
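A minimal sketch of this input masking, assuming the common convention that label positions set to -100 are ignored by the cross-entropy loss; the role bookkeeping is illustrative.

```python
IGNORE_INDEX = -100  # ignored by PyTorch's CrossEntropyLoss by default

def build_labels(input_ids, roles):
    """Copy input_ids into labels, masking system/user tokens so the loss
    is computed only on the assistant's (solution) tokens."""
    labels = []
    for token_id, role in zip(input_ids, roles):  # role: "system"/"user"/"assistant"
        labels.append(token_id if role == "assistant" else IGNORE_INDEX)
    return labels
```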
We evaluate models on the test sets of GSM8K (grade-school math) (Cobbe et al., 2021), MATH (challenging competition math problems) (Hendrycks et al., 2021), Math401 (arithmetic ability) (Yuan et al., 2023b), and Math23K (Chinese grade-school math) (Wang et al., 2017). We compare MATH-QWEN-CHAT with the proprietary models ChatGPT and Minerva (Lewkowycz et al., 2022) and the open-source math-specialized models RFT (Yuan et al., 2023a), WizardMath (Luo et al., 2023a), and GAIRMath-Abel (Chern et al., 2023a) in Table 12. MATH-QWEN-CHAT models show better math reasoning and arithmetic abilities compared to open-source models and QWEN-CHAT models of similar sizes. Compared to proprietary models, MATH-QWEN-7B-CHAT outperforms Minerva-8B on MATH. MATH-QWEN-14B-CHAT approaches Minerva-62B and GPT-3.5 on GSM8K and MATH, and delivers better performance on arithmetic ability and Chinese math problems.
6 RELATED WORK

6.1 LARGE LANGUAGE MODELS

The excitement around LLMs began with the introduction of the Transformer architecture (Vaswani et al., 2017), which was then applied to pretraining on large-scale data by researchers such as Radford et al. (2018), Devlin et al. (2018), and Liu et al. (2019). These efforts led to significant success in transfer learning, with model sizes growing from 100 million to over 10 billion parameters (Raffel et al., 2020; Shoeybi et al., 2019). In 2020, the release of GPT-3, a massive language model 10 times larger than T5, demonstrated the incredible potential of few-shot and zero-shot learning through prompt engineering and in-context learning, and later chain-of-thought prompting (Wei et al., 2022c). This success has led to a number of studies exploring the possibilities of further scaling these models (Scao et al., 2022; Zhang et al., 2022; Du et al., 2021; Zeng et al., 2022; Lepikhin et al., 2020; Fedus et al., 2022; Du et al., 2022; Black et al., 2022; Rae et al., 2021; Hoffmann et al., 2022; Chowdhery et al., 2022; Thoppilan et al., 2022). As a result, the community has come to view these large language models as essential foundations for downstream models (Bommasani et al., 2021). The birth of ChatGPT (OpenAI, 2022) and the subsequent launch of GPT-4 (OpenAI, 2023) marked two historic moments in the field of artificial intelligence, demonstrating that large language models (LLMs) can serve as effective AI assistants capable of communicating with humans. These events have sparked interest among researchers and developers in building language models that are aligned with human values and potentially even capable of achieving artificial general intelligence (AGI) (Anil et al., 2023; Anthropic, 2023a;b).
One notable development in this area is the emergence of open-source LLMs, specifically LLaMA (Touvron et al., 2023a) and LLAMA 2 (Touvron et al., 2023b), which have been recognized as the most powerful open-source language models ever created. This has led to a surge of activity in the open-source community (Wolf et al., 2019), with a series of large language models being developed collaboratively to build upon this progress (Mosaic ML, 2023; Almazrouei et al., 2023; ChatGLM2 Team, 2023; Yang et al., 2023; InternLM Team, 2023).
6.2 ALIGNMENT

The community was impressed by the surprising effectiveness of alignment on LLMs. Previously, LLMs without alignment often struggled with issues such as repetitive generation, hallucination, and deviation from human preferences. Since 2021, researchers have been diligently working on developing methods to enhance the performance of LLMs on downstream tasks (Wei et al., 2022a; Sanh et al., 2021; Longpre et al., 2023; Chung et al., 2022; Muennighoff et al., 2022).
Furthermore, researchers have been actively exploring ways to align LLMs with human instructions (Ouyang et al., 2022; Askell et al., 2021; Bai et al., 2022b;c). One major challenge in alignment research is the difficulty of collecting data. While OpenAI has utilized its platform to gather human prompts or instructions, it is not feasible for others to collect such data. However, there has been some progress in this area, such as the self-instruct approach proposed in Wang et al. (2023c). This innovative work offers a potential solution to the data collection problem in alignment research. As a result, there has been a surge in open-source chat data, including Alpaca (Taori et al., 2023), MOSS (Sun et al., 2023a), Dolly (Conover et al., 2023), Evol-Instruct (Xu et al., 2023b), and others (Sun et al., 2023b; Xu et al., 2023a;c; Chen et al., 2023c; Ding et al., 2023; Ji et al., 2023; Yang, 2023). Similarly, there has been an increase in open-source chat models, such as Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), Guanaco (Dettmers et al., 2023), MOSS (Sun et al., 2023a), WizardLM (Xu et al., 2023b), and others (Xu et al., 2023c; Chen et al., 2023c; Ding et al., 2023; Wang et al., 2023b).
To train an effective chat model, available solutions are mostly based on SFT and RLHF (Ouyang et al., 2022). While SFT is similar to pretraining, it focuses on instruction following using the aforementioned data. However, for many developers, the limited memory capacity is a major obstacle to further research in SFT. As a result, parameter-efficient tuning methods, such as LoRA (Hu et al., 2021) and Q-LoRA (Dettmers et al., 2023), have gained popularity in the community. LoRA tunes only low-rank adapters, while Q-LoRA builds on LoRA and utilizes 4-bit quantized LLMs and paged attention (Dettmers et al., 2022; Frantar et al., 2022; Kwon et al., 2023). In terms of RLHF, recent methods such as PPO (Schulman et al., 2017; Touvron et al., 2023b) have been adopted, but there are also alternative techniques aimed at addressing the complexity of optimization, such as RRHF (Yuan et al., 2023c), DPO (Rafailov et al., 2023), and PRO (Song et al., 2023). Despite the ongoing debate about the effectiveness of RLHF, more evidence is needed to understand how it enhances the intelligence of LLMs and what potential drawbacks it may have.
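To illustrate the low-rank adapter idea, here is a minimal sketch of a LoRA-augmented linear layer in the spirit of Hu et al. (2021): the pretrained weight is frozen and only a low-rank update B·A, scaled by α/r, is trained. This is a sketch of the technique, not any particular library's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = base(x) + (alpha / r) * x @ A^T @ B^T, with the base weight
    frozen and only the low-rank factors A and B trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        # Standard LoRA init: A small random, B zero, so training starts
        # from the unmodified pretrained behavior.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)
```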
6.3 TOOL USE AND AGENTS

An LLM's planning capability allows for the invocation of tools, such as APIs or agent capabilities, through in-context learning, as demonstrated by Schick et al. (2023). Yao et al. (2022) introduced ReAct, a generation format that enables the model to generate thoughts on which tool to use, accept input from API observations, and generate a response. GPT-3.5 and GPT-4, when prompted with few shots, have shown consistent and impressive performance. In addition to tool usage, LLMs can utilize external memory sources like knowledge bases (Hu et al., 2023; Zhong et al., 2023b) or search engines (Nakano et al., 2021; Liu et al., 2023b) to generate more accurate and informative answers.
This has led to the popularity of frameworks like LangChain (LangChain, Inc., 2023). The research on LLMs for tool use has also sparked interest in building agents with LLM capabilities, such as agents that can call different AI models (Shen et al., 2023; Li et al., 2023a), embodied lifelong-learning or multimodal agents (Wang et al., 2023a; Driess et al., 2023), and multiple agents interacting with each other and even building a micro-society (Chen et al., 2023b; Li et al., 2023b; Xu et al., 2023d; Hong et al., 2023).

6.4 LLM FOR CODING

Previous research has demonstrated that LLMs possess remarkable capabilities in code understanding and generation, particularly those with massive numbers of parameters (Chowdhery et al., 2022; Anil et al., 2023; Rae et al., 2021; Hoffmann et al., 2022). Moreover, several LLMs have been pretrained, continued pretrained, or fine-tuned on coding-related data, which has resulted in significantly improved performance compared to general-purpose LLMs. These models include Codex (Chen et al., 2021), AlphaCode (Li et al., 2022), SantaCoder (Allal et al., 2023), StarCoder-Base (Li et al., 2023d), InCoder (Fried et al., 2022), CodeT5 (Wang et al., 2021), CodeGeeX (Zheng et al., 2023), and CODE LLAMA (Rozière et al., 2023). In addition to these models, recent studies have focused on developing specialized alignment techniques for coding, such as Code Llama-Instruct (Rozière et al., 2023) and StarCoder (Li et al., 2023d).
These models can assist developers in various code-related tasks, including code generation (Chen et al., 2021; Austin et al., 2021), code completion (Zhang et al., 2023a), code translation (Szafraniec et al., 2023), bug fixing (Muennighoff et al., 2023), code refinement (Liu et al., 2023c), and code question answering (Liu & Wan, 2021).
In a word, LLMs have the potential to revolutionize the field of coding by providing developers with powerful tools for code comprehension, generation, and related tasks.

6.5 LLM FOR MATHEMATICS

LLMs with a certain model scale have been found to possess the ability to perform mathematical reasoning (Wei et al., 2022b; Suzgun et al., 2022). To encourage LLMs to achieve better performance on math-related tasks, researchers have employed techniques such as chain-of-thought prompting (Wei et al., 2022c) and scratchpad (Nye et al., 2021), which have shown promising results. Additionally, self-consistency (Wang et al., 2022) and least-to-most prompting (Zhou et al., 2022) have further improved the performance of these models on these tasks. However, prompt engineering is a time-consuming process of trial and error, and it is still difficult for LLMs to consistently perform well or achieve satisfactory results in solving mathematical problems. Moreover, simply scaling the data and model size is not an efficient way to improve a model's mathematical reasoning abilities.
Instead, pretraining on math-related corpora has been shown to consistently enhance these capabilities (Hendrycks et al., 2021; Lewkowycz et al., 2022; Taylor et al., 2022; Lightman et al., 2023). Additionally, fine-tuning on math-related instruction-following datasets (Si et al., 2023; Yuan et al., 2023a; Luo et al., 2023a; Yue et al., 2023; Chern et al., 2023a; Yu et al., 2023) has also been effective and more cost-effective than math-specific pretraining. Despite their limitations in terms of accuracy, LLMs still have significant potential to assist users with practical mathematical problems. There is ample scope for further development in this area.
7 CONCLUSION

In this report, we present the QWEN series of large language models, which showcase the latest advancements in natural language processing. With 14B, 7B, and 1.8B parameters, these models have been pretrained on massive amounts of data, including trillions of tokens, and fine-tuned using cutting-edge techniques such as SFT and RLHF. Additionally, the QWEN series includes specialized models for coding and mathematics, such as CODE-QWEN, CODE-QWEN-CHAT, and MATH-QWEN-CHAT, which have been trained on domain-specific data to excel in their respective fields. Our results demonstrate that the QWEN series is competitive with existing open-source models and even matches the performance of some proprietary models on comprehensive benchmarks and human evaluation. We believe that the open access of QWEN will foster collaboration and innovation within the community, enabling researchers and developers to build upon our work and push the boundaries of what is possible with language models. By providing these models to the public, we hope to inspire new research and applications that will further advance the field and contribute to our understanding of the variables and techniques introduced in realistic settings. In a nutshell, the QWEN series represents a major milestone in our development of large language models, and we are excited to see how it will be used to drive progress and innovation in the years to come.
REFERENCES

Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. SantaCoder: Don't reach for the stars! arXiv preprint arXiv:2301.03988, 2023.
Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. Falcon-40B: An open large language model with state-of-the-art performance, 2023.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. PaLM 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
Anthropic. Introducing Claude, 2023a. URL https://www.anthropic.com/index/introducing-claude.

Anthropic. Claude 2. Technical report, Anthropic, 2023b. URL https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf.
Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q Tran, Dara Bahri, Jianmo Ni, et al. ExT5: Towards extreme multi-task scaling for transfer learning. arXiv preprint arXiv:2111.10952, 2021.

Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
AutoGPT. AutoGPT: The heart of the open-source agent ecosystem, 2023. URL https://github.com/Significant-Gravitas/Auto-GPT.

Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. URL http://arxiv.org/abs/1607.06450.
Jinze Bai, Rui Men, Hao Yang, Xuancheng Ren, Kai Dang, Yichang Zhang, Xiaohuan Zhou, Peng Wang, Sinan Tan, An Yang, Zeyu Cui, Yu Han, Shuai Bai, Wenbin Ge, Jianxin Ma, Junyang Lin, Jingren Zhou, and Chang Zhou. OFASys: A multi-modal multi-task learning system for building generalist models. CoRR, abs/2212.04408, 2022a. doi: 10.48550/arXiv.2212.04408. URL https://doi.org/10.48550/arXiv.2212.04408.
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-VL: A versatile vision-language model for understanding, localization, text reading, and beyond. CoRR, abs/2308.12966, 2023. doi: 10.48550/arXiv.2308.12966. URL https://doi.org/10.48550/arXiv.2308.12966.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022b.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022c.
Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: Reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 7432-7439. AAAI Press, 2020. doi: 10.1609/aaai.v34i05.6239. URL https://doi.org/10.1609/aaai.v34i05.6239.
Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. GPT-NeoX-20B: An open-source autoregressive language model. arXiv preprint arXiv:2204.06745, 2022.

bloc97. NTK-aware scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation, 2023. URL https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901, 2020.

ChatGLM2 Team. ChatGLM2-6B: An open bilingual chat LLM, 2023. URL https://github.com/THUDM/ChatGLM2-6B.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374.

Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595, 2023a.
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, et al. AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. arXiv preprint arXiv:2308.10848, 2023b.

Zhihong Chen, Feng Jiang, Junying Chen, Tiannan Wang, Fei Yu, Guiming Chen, Hongbo Zhang, Juhao Liang, Chen Zhang, Zhiyi Zhang, et al. Phoenix: Democratizing ChatGPT across languages. arXiv preprint arXiv:2304.10453, 2023c.
Ethan Chern, Haoyang Zou, Xuefeng Li, Jiewen Hu, Kehua Feng, Junlong Li, and Pengfei Liu. Generative AI for math: Abel. https://github.com/GAIR-NLP/abel, 2023a.
I Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu, et al. FacTool: Factuality detection in generative AI - a tool augmented framework for multi-task and multi-domain scenarios. arXiv preprint arXiv:2307.13528, 2023b.

David Chiang and Peter Cholak. Overcoming a theoretical limitation of self-attention. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 7654-7664, 2022.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 4299-4307, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 2924-2936. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1300. URL https://doi.org/10.18653/v1/n19-1300.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. CoRR, abs/1803.05457, 2018. URL http://arxiv.org/abs/1803.05457.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116, 2019.
Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. Free Dolly: Introducing the world's first truly open instruction-tuned LLM, 2023. URL https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm.
Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. InstructBLIP: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/67d57c32e20fd0a7a302cb81d36e40d5-Abstract-Conference.html.

Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International Conference on Machine Learning, pp. 933-941. PMLR, 2017.

Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. LLM.int8(): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339, 2022.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient finetuning of quantized LLMs. arXiv preprint arXiv:2305.14314, 2023.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023.

Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. PaLM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023.

Nan Du, Yanping Huang, Andrew M Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. GLaM: Efficient scaling of language models with mixture-of-experts. In International Conference on Machine Learning, pp. 5547–5569. PMLR, 2022.

Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. GLM: General language model pretraining with autoregressive blank infilling. arXiv preprint arXiv:2103.10360, 2021.

Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable information. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 5988–6008. PMLR, 17–23 Jul 2022.

William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1):5232–5270, 2022.

Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.

Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida I. Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. InCoder: A generative model for code infilling and synthesis. ArXiv, abs/2204.05999, 2022.

Google. An important next step on our AI journey, 2023. URL https://blog.google/technology/ai/bard-google-ai-search-updates/.

Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. CoRR, abs/1606.08415, 2016. URL http://arxiv.org/abs/1606.08415.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. MetaGPT: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023.

Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Zhao, and Hang Zhao. ChatDB: Augmenting LLMs with databases as their symbolic memory. arXiv preprint arXiv:2306.03901, 2023.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.

Hai Hu, Kyle Richardson, Liang Xu, Lu Li, Sandra Kübler, and Lawrence S. Moss. OCNLI: Original Chinese natural language inference. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pp. 3512–3526. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.findings-emnlp.314. URL https://doi.org/10.18653/v1/2020.findings-emnlp.314.

Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322, 2023.

Hugging Face. Transformers Agents, 2023. URL https://huggingface.co/docs/transformers/transformers_agents.

Baichuan Inc. Baichuan-7B: A large-scale 7B pretraining language model developed by BaiChuan-Inc, 2023a. URL https://github.com/baichuan-inc/Baichuan-7B.

XVERSE Inc. XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc., 2023b. URL https://github.com/xverse-ai/XVERSE-13B.

InternLM Team. InternLM: A multilingual language model with progressively enhanced capabilities, 2023. URL https://github.com/InternLM/InternLM.

Shantanu Jain. tiktoken: A fast BPE tokeniser for use with OpenAI's models, 2022. URL https://github.com/openai/tiktoken/.

Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, and Xiangang Li. Exploring the impact of instruction data scaling on large language models: An empirical study on real-world use cases. arXiv preprint arXiv:2303.14742, 2023.

Zixuan Jiang, Jiaqi Gu, Hanqing Zhu, and David Z. Pan. Pre-RMSNorm and Pre-CRMSNorm transformers: Equivalent and efficient pre-LN transformers. CoRR, abs/2305.14858, 2023. doi: 10.48550/arXiv.2305.14858. URL https://doi.org/10.48550/arXiv.2305.14858.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural Questions: A benchmark for question answering research. Trans. Assoc. Comput. Linguistics, 7:452–466, 2019. doi: 10.1162/tacl_a_00276. URL https://doi.org/10.1162/tacl_a_00276.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.

LangChain, Inc. LangChain: Building applications with LLMs through composability, 2023. URL https://python.langchain.com/.

Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models, 2022.

Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, et al. ModelScope-Agent: Building your customizable agent system with open-source large language models. arXiv preprint arXiv:2309.00986, 2023a.

Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760, 2023b.

Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. CMMLU: Measuring massive multitask language understanding in Chinese. arXiv preprint arXiv:2306.09212, 2023c.

Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Moustafa-Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. StarCoder: May the source be with you! CoRR, abs/2305.06161, 2023d. doi: 10.48550/arXiv.2305.06161. URL https://doi.org/10.48550/arXiv.2305.06161.

Yujia Li, David H. Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with AlphaCode. CoRR, abs/2203.07814, 2022.

Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.

Chenxiao Liu and Xiaojun Wan. CodeQA: A question answering dataset for source code comprehension. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pp. 2618–2632. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.findings-emnlp.223. URL https://doi.org/10.18653/v1/2021.findings-emnlp.223.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023a.

Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, and Jie Tang. WebGLM: Towards an efficient web-enhanced question answering system with human preferences. arXiv preprint arXiv:2306.07906, 2023b.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

Yue Liu, Thanh Le-Cong, Ratnadira Widyasari, Chakkrit Tantithamthavorn, Li Li, Xuan-Bach Dinh Le, and David Lo. Refining ChatGPT-generated code: Characterizing and mitigating code quality issues. CoRR, abs/2307.12596, 2023c. doi: 10.48550/arXiv.2307.12596. URL https://doi.org/10.48550/arXiv.2307.12596.

Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The Flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.

Keming Lu, Hongyi Yuan, Zheng Yuan, Runji Lin, Junyang Lin, Chuanqi Tan, Chang Zhou, and Jingren Zhou. #InsTag: Instruction tagging for analyzing supervised fine-tuning of large language models. CoRR, abs/2308.07074, 2023. doi: 10.48550/arXiv.2308.07074. URL https://doi.org/10.48550/arXiv.2308.07074.

Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. WizardMath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583, 2023a.
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. WizardCoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568, 2023b.

Mosaic ML. MPT-30B: Raising the bar for open-source foundation models, 2023. URL https://www.mosaicml.com/blog/mpt-30b.
Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786, 2022.

Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, and Shayne Longpre. OctoPack: Instruction tuning code large language models. CoRR, abs/2308.07124, 2023.

Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
Maxwell Nye, Anders Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models. ArXiv, abs/2112.00114, 2021.

OpenAI. Introducing ChatGPT, 2022. URL https://openai.com/blog/chatgpt.

OpenAI. ChatML, 2022. URL https://github.com/openai/openai-python/blob/e389823ba013a24b4c32ce38fa0bd87e6bccae94/chatml.md.

OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

OpenCompass Team. OpenCompass: A universal evaluation platform for foundation models, 2023. URL https://opencompass.org.cn/leaderboard-llm.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html.

Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics, 2016. doi: 10.18653/v1/p16-1144. URL https://doi.org/10.18653/v1/p16-1144.