doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.07924 | 33 | ³Based on official API prices for July 2023.
Legend: Class Not Used • Infinite Loop • Methods Not Called • Missing Code Segments • Missing Initialization • Missing Loop Logics • Not Configure Layout or Functionality • Methods Not Implemented • Modules Not Imported • Missing Comments • Missing Exception Handling • Calling without Correct Arguments • Class Defined Twice • Missing Files • Not Correctly Processing Data • Not Handle Exceptions • Not Handle Cases • Others • Type Errors • Use Other Layouts
Figure 7: Distribution of Reviewer's Suggestions. Each color in the pie chart represents a specific category of suggestions provided by the reviewer. | 2307.07924#33 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 33 | The 2-bit model's performance can be further enhanced with fine-grained substructure quantization. In Figure 2, we report the results that preserve FP16 precision for specific weights of important substructures, denoted as "crucial weights". As discussed before, we first consider down projections of FFN as crucial weights. In addition, we also consider preserving more important substructures from the attention component, and select two types of projections with the highest layer-wise quantization error within the attention component based on GPTQ. Specifically, we choose the query and key projections for the LLaMA-7B model, and the key and output projections for the LLaMA-13B model. The results show consistent improvements compared with the variant that simply preserves the entire FFN component (denoted by ¬FFN). Although we preserve substructures in both attention and FFN components, we still have a reduced memory footprint compared to the variant ¬FFN (see the green dotted line). More results on GSM8K and WikiText are reported in Figure 4 in Appendix A.1. These observations show the significance of exploring fine-grained quantization strategies in extreme 2-bit quantization. | 2307.08072#33 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models (LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increase the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
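The substructure-preservation scheme in the 2307.08072#33 chunk above (keep FFN down projections and the highest-error attention projections in FP16, quantize everything else) can be sketched roughly as follows. This is a minimal illustration: the round-to-nearest quantizer stands in for GPTQ, and the LLaMA-style module-name patterns are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

# Substructures kept in FP16, per the chunk above: FFN down projections plus
# the attention projections with the highest layer-wise quantization error
# (query/key for LLaMA-7B). Name patterns are assumed LLaMA-style names.
CRUCIAL_PATTERNS = ("mlp.down_proj", "self_attn.q_proj", "self_attn.k_proj")

def rtn_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Toy symmetric round-to-nearest quantizer standing in for GPTQ."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

def quantize_except_crucial(model: nn.Module, bits: int = 2,
                            crucial=CRUCIAL_PATTERNS) -> None:
    """Quantize every Linear weight except the crucial FP16 substructures."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and not any(p in name for p in crucial):
            with torch.no_grad():
                module.weight.copy_(rtn_quantize(module.weight, bits))
```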
2307.08074 | 33 | TC (Text Completion). The task is to predict a writing continuation given a preceding prompt. We focus on multi-sentence paragraph completion for a targeted evaluation of discourse modeling, which completes a multi-sentence paragraph x_{s:e} given its leading sentence x_s. We use the same data collected for the TI task to construct the TC dataset. Specifically, given a sentence x_{-2}, we aim to predict the concatenation of x_{-1}, x_0, x_1, and x_2. We use the same metrics as the TI task.
# 3.4 Human Evaluation on Benchmark Quality
In this section, we assess the quality of the proposed benchmarks, as listed in Table 2. For the language understanding testsets that require human annotations, we follow Mitani, Freer, and Nelson (2017) to calculate the inter-annotator agreement via Cohen's kappa (0~1). The annotators reach high agreement on the testsets of understanding tasks, especially on the MRC testset, which annotates the correct answer from 2~4 choices. | 2307.08074#33 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
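Read literally, the TC construction in the 2307.08074#33 chunk maps each five-sentence window onto one example: the input is the leading sentence x_{-2} and the target is the concatenation of x_{-1} through x_2. A minimal sketch of that windowing (the function name is ours, assuming a sentence-segmented document):

```python
from typing import Dict, List

def build_tc_examples(sentences: List[str]) -> List[Dict[str, str]]:
    """Build Text Completion examples as defined in the chunk above: given a
    leading sentence x_{-2}, predict the concatenation of x_{-1}..x_{2}.
    Each window therefore spans five consecutive sentences."""
    examples = []
    for i in range(len(sentences) - 4):
        window = sentences[i : i + 5]        # x_{-2}, x_{-1}, x_0, x_1, x_2
        examples.append({
            "input": window[0],              # leading sentence x_{-2}
            "target": " ".join(window[1:]),  # continuation x_{-1}..x_{2}
        })
    return examples

# Toy usage on a six-sentence paragraph:
doc = ["S1.", "S2.", "S3.", "S4.", "S5.", "S6."]
print(build_tc_examples(doc)[0])  # {'input': 'S1.', 'target': 'S2. S3. S4. S5.'}
```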
2307.07924 | 34 | Figure 7: Distribution of Reviewer's Suggestions. Each color in the pie chart represents a specific category of suggestions provided by the reviewer.
placeholder labels (such as "pass" in Python) to be further completed. Additionally, the dialogue frequently addresses the topic of "modules not imported" (19.70%). This issue emerges from the nature of code generation, where the generated code tends to overlook minor details. However, in the context of code generation, ensuring the code's executability becomes crucial. Fortunately, the thought instruction mechanism proposed in this paper effectively tackles these issues by compelling the reviewer to identify incomplete methods and requiring the programmer to fill them. This mechanism can be applied to other scenarios where tasks are completed based on large models but with certain parts missing. Interestingly, the reviewer also emphasizes the importance of code robustness. They underscore considerations for handling potential exceptions in the future and offer hints on avoiding duplicate categories (3.03%). Additionally, the reviewer provides suggestions regarding unused classes in the code (1.52%), identifies infinite loops (1.52%), and emphasizes the necessity of proper environment initialization (1.52%). | 2307.07924#34 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 34 | # 4.2 Fine-tuning Compensation Analysis
# 4.2.1 Experimental Setup
Recently, there have been several attempts to employ fine-tuning to achieve quantization performance compensation (Yao et al., 2023b; Dettmers et al., 2023). Inspired by these studies, we also consider examining the effect of fine-tuning on quantization performance, and set up two experiments accordingly: fine-tuning before quantization and fine-tuning based on the quantized model weights. In both settings, we mainly consider 2-bit and 4-bit quantization for model weights. For model sizes, we perform fine-tuning on LLaMA models of 7B and 13B in the first setting. In the second setting, we conduct fine-tuning on quantized LLaMA models of 7B, 13B and 65B. Throughout the experiments of this part, we report the results obtained on
the MMLU, GSM8K, and AutoEval tasks. Next, we detail the fine-tuning methods separately. | 2307.08072#34 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models (LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increase the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 34 | For the translation and generation testsets, we randomly choose 100 instances for each task, and ask two human annotators to assess their quality in terms of fluency (1~5) and adequacy/coherence (1~5). We follow Kreutzer, Uyheng, and Riezler (2018); Popovic
⁸https://github.com/stanfordnlp/CoreNLP.
Table 3: Chinese Examples of cohesion phenomena in our test suite. | 2307.08074#34 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
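Cohen's kappa, used in the 2307.08074#34 chunk above to measure inter-annotator agreement, is kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each annotator's label distribution. A small self-contained sketch:

```python
from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    """Cohen's kappa between two annotators: (p_o - p_e) / (1 - p_e)."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n     # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Toy MRC-style answer labels from two annotators (choices A-D):
print(cohens_kappa(list("AABBCCDD"), list("AABBCCDA")))  # ~0.833
```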
2307.07924 | 35 | Tester-Programmer Dialogue Analysis In a similar fashion, we analyze the debug dialogue between the tester and the programmer during the testing phase and categorize the main types of bugs encountered. The results are presented in Figure 8. As observed in the figure, the most frequent debug issue between the tester and the programmer is "module not found" (45.76%), accounting for nearly half of the cases. This reflects the model's tendency to overlook very fine details, despite their simplicity. Fortunately, with the thought instruction mechanism proposed in this paper, such bugs can often be easily resolved by importing the required class or method. The second most common types of errors are "attribute error" and "unknown option", each accounting for 15.25% of the cases. "Attribute error" refers to errors in the usage of class attributes, while "unknown option" indicates errors in the parameters of method calls. Another common type of error is "import error", which is similar to "module not found" and is primarily caused by mistakes in the import statements, such as importing the wrong class or using an incorrect import path. In addition to these common error types, ChatDev has the capability to detect relatively | 2307.07924#35 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
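The bug taxonomy in the 2307.07924#35 chunk maps naturally onto Python's exception hierarchy (ModuleNotFoundError, AttributeError, ImportError, and TypeError for bad call arguments). The sketch below classifies a single test run along those lines; it is our illustration, not ChatDev's tester agent.

```python
import subprocess

# Illustrative mapping from interpreter error names to the bug categories
# discussed above; this mapping is an assumption, not ChatDev's actual tester.
CATEGORY_BY_ERROR = {
    "ModuleNotFoundError": "module not found",
    "ImportError": "import error",
    "AttributeError": "attribute error",
    "TypeError": "unknown option",  # bad call arguments usually raise TypeError
}

def classify_run(entry_point: str) -> str:
    """Run the generated program once and classify the failure, if any."""
    try:
        proc = subprocess.run(["python", entry_point],
                              capture_output=True, text=True, timeout=60)
    except subprocess.TimeoutExpired:
        return "possible infinite loop"
    if proc.returncode == 0:
        return "ok"
    lines = proc.stderr.strip().splitlines()
    error_name = lines[-1].split(":", 1)[0] if lines else ""
    return CATEGORY_BY_ERROR.get(error_name, "others")
```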
2307.08072 | 35 | the MMLU, GSM8K, and AutoEval tasks. Next, we detail the fine-tuning methods separately.
Pre-Quantization Fine-tuning In this experiment, we consider a common setting where an optimized model needs to be quantized for practical deployment. For the ICL ability test, we follow Dettmers et al. (2023) and evaluate the impact of fine-tuning using the Alpaca dataset (Taori et al., 2023). For the CoT ability test, we follow Chung et al. (2022) and use the CoT collection, a mixture of nine datasets with CoT annotations written by human raters. For the IF ability test, we follow Taori et al. (2023) and fine-tune LLaMA models on the Alpaca dataset, since it is reported to benefit LLaMA models in instruction following. Additionally, we incorporate LoRA (Hu et al., 2022) to explore the impacts of parameter-efficient fine-tuning on LLMs. | 2307.08072#35 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models (LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increase the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
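The pre-quantization setting in the 2307.08072#35 chunk (LoRA instruction tuning on Alpaca, then quantization) might look roughly like this with the Hugging Face peft library. The model id and hyperparameters are assumptions, and the training loop itself is elided; this is a sketch of the pipeline order, not the paper's exact setup.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Pre-quantization fine-tuning: adapt the FP16 model first, quantize second.
base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()
# ... instruction tuning on Alpaca-style data goes here (Trainer loop elided) ...
merged = model.merge_and_unload()  # fold the LoRA deltas back into FP16 weights
# `merged` would then be handed to a GPTQ-style weight quantizer (not shown).
```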
2307.08074 | 35 | Type: Example
Repetition: (Youge went alone and returned to the auction house ... After leaving the auction house, Youge contacted the son of the Han family and others ...)
Synonyms: (A master does not necessarily have to be very handsome ... Don't say a good-looking wizard is ... when you see one.) good-looking → [ugly|weird|...]
Ellipsis / Substitution / Reference / Conjunction: (Liu Jia asked, "I don't know how many acres of paddy fields you want to buy?" ... Lian Fangzhou just | 2307.08074#35 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.08072 | 36 | Post-Quantization Fine-tuning We then explore the benefits of fine-tuning to address the performance decline in the model after quantization. Our goal is to assess how effective fine-tuning can be in mitigating the negative impact of quantization on model performance. To achieve this, we create a specialized tool for parameter-efficient fine-tuning of LLaMA models after weight quantization. This tool enables us to fine-tune LLaMA-65B models at 2-bit precision using just a single A100 80G, and it outperforms the 16-bit LLaMA-13B model before fine-tuning, as measured by MMLU (5-shot). Directly optimizing quantized weights is challenging and typically requires specialized optimization techniques like Quantization-Aware Training (QAT) (Liu et al., 2023). To overcome this obstacle, we draw inspiration from the LoRA approach, which involves trainable rank decomposition matrices for fine-tuning. However, the original LoRA approach is designed for fixed pre-trained weights and may not be suitable for quantized models. | 2307.08072#36 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models (LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increase the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 36 | (Liu Jia asked, "I don't know how many acres of paddy fields you want to buy?" ... Lian Fangzhou just smiled and said, "About two thousand acres!") (We met Mike at nine o'clock on Tuesday evening. At that time, we invited him to the party.) (However, David was horrified ... He immediately stopped wanting to say more ...) ... | 2307.08074#36 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 37 | Case Study Figure 9 showcases an example of ChatDev developing a Gomoku game (also known as "Five in a Row" and "Gobang"). On the left, we see the result of naive software created without a GUI. This version of the game can only be played through a command terminal, limiting its interactivity and overall enjoyment. In contrast, by incorporating GUI design, ChatDev can
Legend: Not Properly Initialized • Not Packed Correctly • Method Not Correctly Called • Method Not Found • Missing Files • Module Not Used • Typo in Decorator Names • Module Not Found • Attribute Error • Unknown Option • Import Error • Others
Figure 8: Distribution of Tester's Suggestions. Each color in the pie chart represents a specific category of bugs provided by the tester.
[Figure 9 panels: Programmer, Designer, and Human roles across image creation and image placement.]
Figure 9: The produced software for the task "design a basic Gomoku game". | 2307.07924#37 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 37 | To address this issue, we adapt the LoRA approach by replacing its pre-trained weights with quantized weights generated by GPTQ. We perform this adaptation using pre-trained weights from LLaMA models at various scales, such as 7B, 13B, 30B, and 65B, quantized at 2-bit, 4-bit, and 8-bit precision levels with GPTQ. By incorporating quantized weights into the LoRA framework, we achieve an impressive reduction in memory consumption. Particularly noteworthy is our fine-tuning of the LLaMA-65B model, where we achieve a remarkably low consumption of only 17.8 GiB, highlighting the highly efficient utilization of parameters. The code for this work is implemented using GPTQ and LoRA and is available as an open-source project at https://github.com/RUCAIBox/QuantizedEmpirical. | 2307.08072#37 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models (LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increase the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
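The adaptation described in the 2307.08072#37 chunk, replacing LoRA's frozen pre-trained weights with GPTQ-quantized ones, reduces to training only the rank-decomposition matrices on top of a frozen quantized base. A minimal sketch under that reading (not the released QuantizedEmpirical code):

```python
import torch
import torch.nn as nn

class QuantLoRALinear(nn.Module):
    """LoRA over a frozen quantized base: only A and B are trainable.
    The (de)quantized weight matrix is assumed to come from GPTQ."""

    def __init__(self, w_quantized: torch.Tensor, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.register_buffer("weight", w_quantized)  # frozen quantized base
        out_f, in_f = w_quantized.shape
        self.lora_a = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))  # zero init: no-op at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.t()
        delta = (x @ self.lora_a.t()) @ self.lora_b.t() * self.scaling
        return base + delta
```

For scale, 65B parameters at 2 bits is roughly 65e9 × 2 / 8 bytes ≈ 15.1 GiB of frozen weights, which is consistent with the reported 17.8 GiB once quantization scales and the small LoRA and optimizer states are added.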
2307.08074 | 37 | David was horrified ... (Chen Xu was somewhat doubtful, ... however, it was obviously unwise to contradict at this time.) at nine o'clock on Tuesday evening. (→ paddy fields) (At that time → nine o'clock on Tuesday evening) (He → [She|It|...]) | 2307.08074#37 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 38 | Figure 9: The produced software for the task "design a basic Gomoku game".
create a visually appealing small game. This version surpasses the interface-less version in terms of interactivity and user experience, providing a more enjoyable and engaging gameplay environment. Furthermore, ChatDev's designer can assist the programmer in creating additional graphics to enhance the GUI's aesthetics and usability, without compromising its functionality. These graphics, carefully crafted by the designer, contribute to making the GUI more visually pleasing and user-friendly.
Additionally, if human users are unsatisfied with the images created by the art designer, they have the flexibility to manually replace the original images after ChatDev completes the software. This allows for further customization according to users' preferences, without affecting the software's core functionality. Users can tailor the visual elements to their liking, resulting in a personalized software experience that aligns with their individual preferences.
For a more comprehensive understanding, we exemplify the dialogue process that determines the programming language choice during designing. More exemplary dialogues extracted from the chat chain of the Gomoku game are shown in Appendix A, including the prompts we designed and the dialogue process between agents at each phase. Please note that, due to space constraints, we only display key information during the dialogue, omitting overly fine-grained details. | 2307.07924#38 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 38 | 4.2.2 Results and Analysis The benefits of pre-quantization fine-tuning encounter a significant decline at 2-bit precision. We conduct comparison experiments involving full-parameter fine-tuning (FFT) and parameter-efficient fine-tuning with LoRA on the FP16 model, followed by quantization with GPTQ. The results are summarized in Table 4. Compared with the base model, the FFT approach yields notable improvements on MMLU, GSM8K, and AutoEval. When employing 4-bit quantization, we observe that the benefits obtained from FFT are retained, with almost no performance degradation on MMLU and AutoEval. However, when using extreme 2-bit quantization, the gains from FFT decrease substantially, particularly in the case of GSM8K (i.e., 2.6 for LLaMA-7B and 2.0 for LLaMA-13B). It is worth noting that the LLM's CoT capability is significantly compromised in this case | 2307.08072#38 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models (LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increase the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 38 | (2021) to calculate inter-annotator agreement via Krippendorff's α (0~1) (Krippendorff 2013). Clearly, all outputs are fluent and highly correlated with the input sentences (i.e., > 4) with reasonable agreement, showing that the proposed benchmark has high quality.
# 4. Disco-Bench Diagnostic Test Suite
The general-purpose automatic metrics (e.g. BLEU and PPL) may not be sufficient to distinguish model performance in terms of discourse (Wong and Kit 2012; Müller et al. 2018; Voita et al. 2018; Voita, Sennrich, and Titov 2019; Lin, Ng, and Kan 2011). To better measure the ability of models on discourse modeling, we handcraft a discourse-aware test suite that is complementary to general evaluation.
# 4.1 Definition and Annotation
We adapt the idea of contrastive testing in our approach (Bawden et al. 2018; Voita, Sennrich, and Titov 2019; Cai and Xiong 2020; He, Long, and Xiong 2022; Wang et al. 2023a) and propose a test suite of cohesion for both English and Chinese languages (as shown in Table 3).
| 2307.08074#38 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
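Contrastive testing of the kind the 2307.08074#38 chunk describes can be scored by comparing a language model's average negative log-likelihood on the correct versus the incorrect hypothesis. The model id below is a placeholder assumption, and the example sentences are taken from the test-suite table later in this section.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # placeholder model id
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def avg_nll(text: str) -> float:
    """Mean token-level negative log-likelihood under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()

context = "The little princess escaped from the castle."
correct = context + " In the end she hid in the forest."
incorrect = context + " However, she hid in the forest."
# The model passes this contrastive item if it prefers the correct hypothesis.
print("model prefers correct:", avg_nll(correct) < avg_nll(incorrect))
```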
2307.07924 | 39 | Designing: Programming Language
Roles: CEO (Chief Executive Officer), CTO (Chief Technology Officer)
- To satisfy the new user's demand and make a desktop application realizable, you should propose a concrete programming language.
- Python with the Pygame library is a suitable choice for implementing a basic Gomoku game.
- Great! Python is a versatile and widely-used programming language that will be suitable for implementing a basic Gomoku game. Its simplicity and readability make it a popular choice for beginners and experienced developers alike.
- Let's get started with the development of the basic Gomoku game in Python using the Pygame library!
- <END> Python. | 2307.07924#39 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
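As a concrete illustration of the stack the agents settle on in the 2307.07924#39 dialogue above (Python plus Pygame for a basic Gomoku game), a minimal board with alternating stone placement might look as follows. This is our sketch, not ChatDev's generated program, and it omits win detection.

```python
import pygame

SIZE, CELL, MARGIN = 15, 40, 20  # 15x15 board, 40px cells, 20px border

def main() -> None:
    pygame.init()
    side = MARGIN * 2 + CELL * (SIZE - 1)
    screen = pygame.display.set_mode((side, side))
    pygame.display.set_caption("Gomoku")
    stones, black_turn = {}, True   # (col, row) -> True if black
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEBUTTONDOWN:
                col = round((event.pos[0] - MARGIN) / CELL)
                row = round((event.pos[1] - MARGIN) / CELL)
                if 0 <= col < SIZE and 0 <= row < SIZE and (col, row) not in stones:
                    stones[(col, row)] = black_turn
                    black_turn = not black_turn
        screen.fill((222, 184, 135))  # wooden board color
        for i in range(SIZE):         # draw the grid lines
            x = MARGIN + i * CELL
            pygame.draw.line(screen, (0, 0, 0), (x, MARGIN), (x, side - MARGIN))
            pygame.draw.line(screen, (0, 0, 0), (MARGIN, x), (side - MARGIN, x))
        for (c, r), is_black in stones.items():
            color = (0, 0, 0) if is_black else (255, 255, 255)
            center = (MARGIN + c * CELL, MARGIN + r * CELL)
            pygame.draw.circle(screen, color, center, CELL // 2 - 2)
        pygame.display.flip()
    pygame.quit()

if __name__ == "__main__":
    main()
```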
2307.07924 | 40 |
[Figure 10 content: prompts "Generate some medical software information following rule 1, 2, 3", "Generate some medical software. Don't give descriptions similar to Medical Info Tracker, Medical Image Analyzer.", and "Check if these generated software information follow rule 1, 2, 3"; generated names include Medical Info Tracker, Medical Image Analyzer, Medical Symptom Checker, Patient Tracker, Medical Diet Planner, and Patient Care Tracker; stage labels: Random Sampling, Sequential Sampling, Check Reports.]
Figure 10: The three-stage process of NLDD creation. We only show the generated names of the software; the prompts in the figure are for example only.
# 4 NLDD Dataset
We collect and open-source⁴ a large and diverse dataset named NLDD (Natural Language Dataset for Dev), which contains 1,200 software prompts for the Natural Language to Software (NL2Software) task. Each sample in NLDD includes the name, description, and category of the software. | 2307.07924#40 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
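For readers of chunk 2307.07924#40 above: each NLDD sample records a software name, description, and category. A minimal sketch of what one record could look like; the field names and example values are illustrative assumptions, not the released schema (see the NLDD repository for the real format):

```python
from dataclasses import dataclass

@dataclass
class NLDDSample:
    """One NLDD record: a natural-language software prompt.

    Field names here are assumptions for illustration; consult
    https://github.com/OpenBMB/ChatDev/tree/main/NLDD for the real schema.
    """
    name: str         # short software name
    description: str  # natural-language prompt describing the software
    category: str     # one of the 5 major categories / 40 subcategories

# Hypothetical example record (name taken from Figure 10's generated names).
sample = NLDDSample(
    name="Medical Symptom Checker",
    description="A desktop tool that lets a user enter symptoms and "
                "suggests possible conditions, without internet access.",
    category="Medical",
)
print(sample.name, "->", sample.category)
```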
2307.08072 | 40 | Parameter-efficient fine-tuning still lags behind full-parameter fine-tuning, especially on ICL and CoT tasks. Parameter-efficient fine-tuning has gained popularity due to its ability to reduce the number of fine-tuning parameters while retaining a decent performance. We include the results of LoRA fine-tuning in the column "LoRA" of Table 4. We can see that LoRA can lead to a substantial improvement over the base models in most cases, and the performance superiority from fine-tuning also persists under 4-bit quantization but does not always hold for 2-bit quantization. Furthermore, LoRA still has a significant gap compared to FFT (e.g., 25.8 vs. 38.0 on GSM8K). Another finding is that LoRA fine-tuning drops substantially on GSM8K under 4-bit quantization, suggesting that full-parameter fine-tuned models might be more appropriate to consider for quantization on complex inference tasks.
Post-quantization fine-tuning yields substantial performance improvement and meanwhile can be | 2307.08072#40 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
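To make the "LoRA" column in the chunk above concrete, here is a minimal LoRA setup sketch using the Hugging Face peft library. The paper does not publish its training stack, so the checkpoint name, rank, and target modules below are illustrative assumptions rather than the authors' configuration:

```python
# Minimal LoRA fine-tuning setup for a LLaMA-style model (illustrative;
# checkpoint name and all hyperparameters are assumptions, not the paper's).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "huggyllama/llama-7b"  # example checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```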
2307.08074 | 40 | Table 4 examples of contrastive pairs (Chinese originals rendered in English where garbled).
Understanding. Task: MRC (Machine Reading Comprehension); device: Conjunction.
Context: "The little princess escaped from the castle."
Correct: "In the end she hid in the forest." Incorrect: "However, she hid in the forest."
Hypothesis: "Where did the little princess go after she escaped?" (A) Southern Wall (B) Forest (C) Castle
Ranking: Context + Correct/Incorrect → Hypothesis → Probability
Translation. Task: NT (Novel Translation); device: Reference.
Context: "King Ding looked at Qingshuang with a smile."
Current (Chinese source): 他觉得清霜很滑稽。
Correct: He thinks Qingshuang is funny. | 2307.08074#40 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 41 | NLDD is created by prompting ChatGPT with a three-stage strategy and well-designed rules. We collect the main software categories from four prominent software store platforms: Ubuntu Snap Shop, Google Play Store, Microsoft Store, and Apple App Store. We further organized these into five major categories with 40 subcategories and asked ChatGPT to generate software data for each. See Figure 11 in the appendix for all category details.
To circumvent the generation of repetitive content, NLDD is created with a query-prompt-based three-stage strategy: random sampling, sequential sampling, and checking (a minimal code sketch of this loop appears below). As shown in Figure 10, the strategy first seeds the dataset by randomly sampling some software data, then records the existing entries so that ChatGPT is steered toward producing novel ones.
1. Random Sampling: First, ChatGPT is independently inquired multiple times to obtain software information under a certain category, and then the duplication is removed at the token granularity of the software name.
2. Sequential Sampling: Then we add the generated software information in sequence in the form of negative prompts, requiring ChatGPT to continue generating unique software information.
3. Check: Although ChatGPT has been required to follow certain rules when generating, an LLM is more likely to be overconfident when generating according to rules than when judging based on rules. Therefore, our last step is to let ChatGPT determine whether the generated software follows the rules. | 2307.07924#41 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
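The three-stage loop of chunk 2307.07924#41 above, as a minimal sketch. ask_chatgpt is a placeholder for a real chat-completion call, and all prompt wording is an illustrative assumption modeled on Figure 10:

```python
# Sketch of NLDD creation: random sampling -> sequential sampling with
# negative prompts -> rule check. `ask_chatgpt` stands in for an LLM API
# call that returns a list of generated software names.
def ask_chatgpt(prompt: str) -> list[str]:
    raise NotImplementedError("placeholder for a chat-completion API call")

def generate_category(category: str, n_rounds: int = 5) -> set[str]:
    names: set[str] = set()
    # Stage 1: random sampling, deduplicated at the software-name level.
    for _ in range(n_rounds):
        names.update(ask_chatgpt(
            f"Generate some {category} software following rules 1, 2, 3."))
    # Stage 2: sequential sampling, feeding back existing names as negatives.
    for _ in range(n_rounds):
        negatives = ", ".join(sorted(names))
        names.update(ask_chatgpt(
            f"Generate some {category} software. "
            f"Don't give descriptions similar to: {negatives}."))
    # Stage 3: check. Judging rule compliance is more reliable than
    # trusting rule-following during generation.
    return {n for n in names
            if ask_chatgpt(f"Does '{n}' follow rules 1, 2, 3? Answer yes or no.") == ["yes"]}
```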
2307.08072 | 41 | Post-quantization fine-tuning yields substantial performance improvement and meanwhile can be
| #To | Bits | MMLU Base | MMLU LoRA | MMLU FFT | GSM8K Base | GSM8K LoRA | GSM8K FFT | AutoEval Base | AutoEval LoRA | AutoEval FFT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 7B | 16-bit | 35.2 | 37.7 | 41.7 | 13.1 | 25.8 | 38.0 | 1121/1134 | 1072/1327 | 1170/1329 |
| 7B | 4-bit | 34.2 | 35.7 | 40.1 | 13.5 | 22.7 | 35.7 | 1092/1335 | 1053/1340 | 1146/1327 |
| 7B | 2-bit | 3.8 | 1.2 | 9.0 | 0.0 | 0.0 | 2.6 | 607/1263 | 658/1336 | 647/1297 |
| 13B | 16-bit | 47.0 | 46.0 | 47.7 | 16.4 | 35.2 | 46.0 | 1084/1335 | 1073/1344 | 1146/1326 |
| 13B | 4-bit | 46.3 | 46.7 | 46.7 | 16.5 | 30.7 | 44.4 | 1119/1321 | 1133/1335 | 1154/1329 |
| 13B | 2-bit | 14.8 | 20.7 | 18.4 | 0.0 | 2.3 | 2.0 | 635/1258 | 701/1319 | 615/1223 |
| 2307.08072#41 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 41 | Current (Chinese source): 他觉得清霜很滑稽。
Correct: He thinks Qingshuang is funny. Incorrect: She think the Qingshuang is funny.
Ranking: Context + Current → Correct/Incorrect → Probability
Generation. Task: TC (Text Completion); device: Repetition.
Context: "Ye Yuan's right arm fused with the primordial dragon bone."
Correct: "But Ye Yuan felt as if his right arm was about to break." Incorrect: "But Ye Yuan felt as if his left hand was about to break."
Hypothesis: "The power of this punch is too strong!"
Ranking: Context + Correct/Incorrect + Hypothesis → Probability | 2307.08074#41 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 42 | NLDD is created with human-designed rules that make the collected software easy for researchers to evaluate; for example, the collected software does not require internet access or multi-player participation. It is curated to facilitate research in NL2Software. We also give a visualization and analysis of the created software descriptions in the appendix (see Figures 12 and 13).
4The data is available at https://github.com/OpenBMB/ChatDev/tree/main/NLDD.
# 5 Discussion
Even though ChatDev offers a novel paradigm for software development that is training-free, efficient, and cost-effective, we recognize the presence of potential risks and limitations that require further investigation and resolution.
Even when we set the temperature parameter of the large language model to a very low value, we observe inherent randomness in the generated output. Consequently, each piece of software produced may vary between runs. As a result, this technology is best suited for open and creative software production scenarios where variations are acceptable. Moreover, there are instances where the software fails to meet the users' needs. This can be attributed to unclear user requirements and the inherent randomness in text or code generation. | 2307.07924#42 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
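An illustration of the sampling randomness discussed in chunk 2307.07924#42: even a very low temperature does not make chat completions reproducible across calls. The snippet uses the legacy (pre-1.0) openai Python SDK and an example model name; both are assumptions for illustration:

```python
# Two identical requests at temperature=0.1 can still return different
# text, since decoding remains stochastic. Legacy openai SDK (<1.0).
import openai

openai.api_key = "sk-..."  # set your own key
prompt = [{"role": "user", "content": "Design a simple 2048 game in Python."}]
for i in range(2):
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # example model name
        messages=prompt,
        temperature=0.1,        # low, but not deterministic
    )
    print(i, reply.choices[0].message.content[:80])
```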
2307.08072 | 42 | Table 4: The results of pre-quantization fine-tuning on MMLU, GSM8K and AutoEval of LLaMA families. We denote "Base" as baseline results without fine-tuning. "LoRA" and "FFT" denote parameter-efficient LoRA fine-tuning and full-parameter fine-tuning, respectively.
(Body of Table 5, post-quantization LoRA fine-tuning "LoRAq"; #Tr = trainable parameters in millions, Mem. = fine-tuning memory in GiB, 0-shot/5-shot = MMLU accuracy.)
| #To | Bits | #Tr (M) | Mem. (GiB) | 0-shot Base | 0-shot LoRAq | 5-shot Base | 5-shot LoRAq |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 7B | 4-bit | 20.0 | 3.8 | 31.0 | 31.4 | 34.2 | 36.8 |
| 7B | 2-bit | 20.0 | 2.2 | 2.3 | 3.7 | 3.8 | 7.4 |
| 13B | 4-bit | 31.3 | 7.0 | 39.0 | 44.1 | 45.9 | 45.5 |
| 13B | 2-bit | 31.3 | 3.9 | 4.9 | 28.3 | 14.8 | 28.9 |
| 65B | 4-bit | 99.9 | 32.7 | 57.1 | 57.0 | 63.0 | 60.5 |
| 65B | 2-bit | 99.9 | 17.8 | 9.0 | 42.0 | 22.6 | 44.4 |
| 2307.08072#42 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 42 | ⢠Repetition means the repeating of certain words or phrases. We mainly annotate nouns repetition in 4â¼5 neighbouring sentences.
⢠Synonyms means related words that having the same connotations, implications, or reference in two sentences. In our test suite, this phenomenon include nouns and adjectives synonyms in 4â¼5 neighbouring sentences.
⢠Ellipsis means the omission of one or more words that are obviously understood but that must be supplied to make a construction grammatically complete. This omission often happens after wh-words in English and in subject elements in Chinese.
Substitution occurs when one item within a text or discourse is replaced by another. In English, such nouns are often replaced by âoneâ or âsomeâ, and verbs are replaced by âdoâ or âdidâ. In Chinese, this often happens around quantiï¬er or temporal adverbial. ⢠Reference is a relationship between objects in which one object designates, or acts as a
means by which to connect to or link to, another object.
⢠Conjunction expresses a logical semantic relationship between two sentences rather than between words or structures. We mainly annotate additive, adversative, causal, and temporal.
# 4.2 Contrastive Testing | 2307.08074#42 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
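As a toy companion to the Repetition device in chunk 2307.08074#42, the sketch below flags content words that recur within a window of five neighbouring sentences. The paper's test suite is manually annotated; this simplified, English-only heuristic is only meant to illustrate the idea:

```python
# Toy repetition-cohesion detector: report words that recur within a
# window of 5 neighbouring sentences (the paper annotates manually).
import re
from collections import defaultdict

def repeated_words(sentences: list[str], window: int = 5, min_len: int = 4):
    hits = defaultdict(list)
    for i, sent in enumerate(sentences):
        words = {w.lower() for w in re.findall(r"[A-Za-z]+", sent) if len(w) >= min_len}
        for j in range(max(0, i - window + 1), i):
            earlier = {w.lower() for w in re.findall(r"[A-Za-z]+", sentences[j])}
            for w in words & earlier:
                hits[w].append((j, i))
    return dict(hits)

doc = ["The princess escaped from the castle.",
       "Guards searched the castle all night.",
       "By morning the princess had reached the forest."]
print(repeated_words(doc))  # {'castle': [(0, 1)], 'princess': [(0, 2)]}
```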
2307.07924 | 43 | While the designer agent is capable of creating images [35], it is important to acknowledge that the directly generated images may not always enhance the GUI's aesthetics. At times, they may introduce excessive complexity, which can hinder user experience. This is primarily because each image is generated independently, lacking direct visual correlation. To address this, we have provided the option for users to customize the GUI as a system hyperparameter, allowing them to decide whether to enable this feature or not.
Additionally, the large language model may exhibit inherent biases [30], leading to the generation of code patterns that do not necessarily align with the problem-solving thinking of real programmers. Regarding risks, it is important to note that existing large language models are not fully tuned to be harmless, making them vulnerable to potential misuse by malicious users for harmful purposes. Furthermore, the generated software currently lacks malicious intent identification for sensitive file operations. Therefore, users are advised to conduct their own code review before running the software to prevent any unnecessary data loss.
Additionally, the assessment of our ChatDev framework's software-level task completion capabilities presents formidable challenges, owing to the vast scope and heterogeneous nature of the generated tasks. This mandates the active participation of a multitude of domain experts. | 2307.07924#43 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08074 | 43 | # 4.2 Contrastive Testing
Table 4 provides a detailed description of how we formulate these contrastive pairs. Each instance in our methodology comprises a contrastive pair, consisting of a correct and an incorrect input/hypothesis based on cohesion properties. The original content from the test set serves as the correct candidate, while we introduce variations by altering its discourse devices, creating the incorrect candidates. We select one representative task from each type of the Disco-Bench Benchmark. Accordingly, we adopt diverse strategies which vary based on the location of modification:
• MRC (Understanding): To generate an incorrect candidate, we introduce noise into the input, transforming it from x to x′, while keeping the hypothesis y constant. Thus, each instance contains a correct (x, y) and an incorrect (x′, y) candidate. We then calculate the probability of the golden label by inputting these into the relevant models.
• NT (Translation): We introduce noise into the target translation to generate an incorrect candidate, transitioning y to y′, while the source input x remains unaltered. Each instance hence contains a correct (x, y) and an incorrect (x, y′) candidate. Given the input and hypothesis, we calculate the probability of the hypothesis sequence using a forced-decoding method (a scoring sketch follows below). | 2307.08074#43 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
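A minimal sketch of the forced-decoding scoring referenced in chunk 2307.08074#43: score each candidate by its summed token log-likelihood under a causal LM and compare the correct and incorrect variants. The gpt2 checkpoint and the reduction to a single causal LM are illustrative assumptions; the paper evaluates many architectures:

```python
# Rank a contrastive pair by forced-decoding log-likelihood under a
# causal LM. The checkpoint choice is an example, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def sequence_logprob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    out = lm(ids, labels=ids)                      # teacher-forced pass
    return -out.loss.item() * (ids.shape[1] - 1)   # sum of token log-probs

context = "The little princess escaped from the castle."
correct = context + " In the end she hid in the forest."
incorrect = context + " However, she hid in the forest."
print(sequence_logprob(correct) > sequence_logprob(incorrect))
```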
2307.07924 | 44 | Although the study may potentially help junior programmers or engineers in the real world, it is challenging for the system to generate perfect source code for high-level or large-scale software requirements. This difficulty arises from the agents' limited ability to autonomously determine specific implementation details, often resulting in multiple rounds of lengthy discussions. Additionally, large-scale software development proves challenging for both reviewers and testers, as it becomes difficult to identify defects or vulnerabilities within the given time constraints.
# 6 Related Work | 2307.07924#44 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 44 | conducted in a lightweight way. To fine-tune a quantized model, we make two major modifications based on the original LoRA method. First, we employ GPTQ to quantize the FP16 model to 2/4 bits. Subsequently, we replace the pre-trained weights of the LoRA method with the quantized weights; the remaining steps are the same as in the original LoRA. The experimental results are presented in the column "LoRAq" of Table 5. Overall, this approach significantly reduces the memory cost during fine-tuning (see the "Mem." column), enabling the fine-tuning of a 65B model on a single NVIDIA A100. A comparison of the results with the base model indicates that the enhancement effect of LoRAq is particularly pronounced at 2 bits (e.g., 44.4 vs. 22.6 for the five-shot setting). Notably, under fewer total bits, the 2-bit 65B model surpasses the non-fine-tuned 13B model with FP16 precision in the zero-shot setting (i.e., 42.0 vs. 41.4). These findings | 2307.08072#44 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
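A self-contained sketch of the LoRAq idea from chunk 2307.08072#44: freeze a quantized base weight and train only a low-rank correction on top. Plain round-to-nearest quantization stands in here for GPTQ (which calibrates on data), so this is a conceptual illustration rather than the authors' implementation:

```python
# Conceptual LoRAq layer: frozen k-bit quantized base weight plus a
# trainable low-rank (LoRA) update. RTN quantization stands in for GPTQ.
import torch
import torch.nn as nn

def quantize_rtn(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax  # per-row
    return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

class LoRAqLinear(nn.Module):
    def __init__(self, linear: nn.Linear, bits: int = 4, r: int = 8, alpha: int = 16):
        super().__init__()
        # Quantized weight is a buffer: frozen, excluded from the optimizer.
        self.register_buffer("w_q", quantize_rtn(linear.weight.data, bits))
        self.lora_a = nn.Parameter(torch.randn(r, linear.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(linear.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.w_q.T                          # frozen quantized path
        update = (x @ self.lora_a.T) @ self.lora_b.T   # trainable low-rank path
        return base + self.scaling * update

layer = LoRAqLinear(nn.Linear(512, 512), bits=2)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 8192
```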
2307.08074 | 44 | • TC (Generation): Similar to the MRC task, we introduce noise into the input while the hypothesis remains unchanged. By combining the input and hypothesis, we directly calculate the probability of the entire sequence.
In conclusion, we have annotated a total of 250 instances for the MRC task, 500 for the NT task, and 250 for the TC task, each marked with 6 different types of cohesion. Given each instance, we assess different models on their ability to rank the correct candidate higher than the incorrect one.
# 5. Disco-Bench Pretraining
# 5.1 Training Data
We present an extensive pretraining dataset (400GB), consisting of both Chinese and English texts, designed to align with the benchmark's literature domain. As shown in Table 5, this corpus includes numerous categories, such as Electronic, Modernist, Ancient, and Others, each further divided into specific genres. For the Chinese language, we offer millions of documents ranging from web fiction to ancient texts. For the English language, the dataset includes a similarly wide range, from web fiction to classical masterpieces and beyond. Overall, this rich dataset provides a thorough foundation for training sophisticated language models, emphasizing the fine-grained understanding of discourse information. | 2307.08074#44 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
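Given candidate scores such as the forced-decoding log-probabilities sketched earlier, the evaluation described in chunk 2307.08074#44 reduces to a pairwise ranking accuracy. A minimal sketch, where score is any model-specific probability function:

```python
# Pairwise ranking accuracy over contrastive instances: the fraction of
# pairs where the correct candidate outscores the incorrect one.
from typing import Callable, List, Tuple

def ranking_accuracy(pairs: List[Tuple[str, str]],
                     score: Callable[[str], float]) -> float:
    wins = sum(score(correct) > score(incorrect) for correct, incorrect in pairs)
    return wins / len(pairs)

# Toy usage with a dummy scorer that prefers shorter sequences.
pairs = [("she hid in the forest", "however she hid in the forest")]
print(ranking_accuracy(pairs, score=lambda s: -len(s)))  # 1.0
```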
2307.07924 | 45 | Deep-Learning-based Software Engineering. Software engineering (SE) is the process of designing, developing, testing and maintaining software in a methodical, rigorous, and measurable manner5. Due to the complexity of software engineering, a significant number of decisions are made based on intuition and, at best, consultation with senior developers. With the rapid development of deep learning (DL) techniques, many researchers have been devoted to applying DL to SE to improve the effectiveness and efficiency of software development, reducing labor costs. Existing DL-based SE work focuses on five SE stages of the life cycle in software engineering separately [14]: (1) Software requirements is to analyze the user demands and specify the requirements for the software [34; 46; 13]. (2) Software design involves the specification of the software framework, modules, protocols, and other features that are necessary for the development of a software [27; 38; 47]. (3) Software implementation is the detailed creation procedure of the software to implement the design [16; 1; 6; 29; 11]. (4) Software testing is to verify that the software can provide expected behaviors on a set of test cases [42; 40; 43; 39]. (5) Software maintenance is | 2307.07924#45 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08074 | 45 | Comparing our corpus to other commonly used datasets for pretraining models, Disco-Bench's dataset exhibits distinct attributes and advantages. Most of the currently available corpora, such as the Wikipedia data used for Chinese BERT (base), have a limited size, approximately 1.5GB. The multilingual datasets, such as those for BART (large) and mBART (CC25), incorporate Chinese, English, and more languages. However, even though they present a larger size (200GB and 1.4TB respectively), their sources are often confined to Wikipedia, the WuDao Corpus, or Common Crawl. In summary, the Disco-Bench dataset excels in terms of language diversity, corpus size, and the uniqueness of data sources, marking it as a valuable resource for diverse and comprehensive language model pretraining.
Table 5: Statistics of the data for Disco-Bench pretraining. All data are extracted from literature texts with discourse context. We count the number of characters for Chinese and the number of words for English. | 2307.08074#45 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 46 | Software testing is to verify that the software can provide expected behaviors on a set of test cases [42; 40; 43; 39]. (5) Software maintenance is to provide necessary support for software users, e.g., documentation generation [19; 40; 28; 20]. Despite the impressive performance achieved by adapting DL methods to SE, these approaches are isolated and only able to accomplish a specific step of the whole procedure of software engineering. Moreover, these DL-based methods require large-scale task-specialized training data to achieve a given goal, and it is impractical to collect such extensive data for the whole procedure of software engineering. | 2307.07924#46 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08074 | 46 | Table 5 body (Chinese-language portion; columns: Category, Genre, #Document, #Sentence, #Chara., Description):
| Category | Genre | #Document | #Sentence | #Chara. | Description |
| --- | --- | --- | --- | --- | --- |
| Electronic | Novel | 91,620,211 | 1,169,127,191 | 58,639,454,317 | Web Fiction |
| Modernist | Classical | 38,495,887 | 490,733,235 | 24,613,514,541 | Masterpiece |
| Modernist | Book | 324,912 | 4,141,874 | 155,189,807 | Publication |
| Ancient | Poetry | 378,323 | 1,495,466 | 31,746,541 | Shi, Ci, Qu, Fu |
| Ancient | Couplet | 8,979,186 | 8,979,186 | 192,214,600 | Antithetical Couplet |
| Ancient | Classical | 1,011 | 1,947,136 | 53,721,504 | Ancient Text |
| Others | Lyrics | 452,715 | 4,952,039 | 165,338,679 | World's Songs |
| Others | Screenplay | 5,213 | 10,426,213 | 156,390,000 | Movie Script |
| Others | Movie | 66,050 | 24,108,241 | 642,392,397 | Movie Subtitle |
| Others | Dialogue | 3,642 | 1,653,469 | 49,406,618 | Talk, Message |
| Total | | 140,327,150 | 1,717,564,050 | 84,699,369,004 | |
(The English-language portion begins here but is cut at the chunk boundary: Electronic/Novel 33,156,134 / 422,757,234 / 26,777,401,794 / Web Fiction; Modernist Classical Book 3,104,507 324,912 ...) | 2307.08074#46 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.08072 | 47 | Emergent Abilities. Recent research has revealed that some superior abilities in Large Language Models (LLMs) may not be present in small models, sparking great interest in their capabilities (Wei et al., 2022). There are various studies that discuss or explore the effect of emergent abilities on different tasks. For example, ICL enables few-shot learning without parameter updates, as exhibited by GPT-3 (Brown et al., 2020), allowing task knowledge injection (Liu et al., 2022) or deploying LLMs in a service paradigm (Sun et al., 2022). CoT breaks down complex reasoning into coherent chains of thought. Models leveraging CoT have shown strong performance surpassing humans on reasoning benchmarks (Fu et al., 2023; OpenAI, 2023). IF aims to precisely execute human instructions, as shown in the powerful ChatGPT. Their strong conversational ability and generalization to unseen tasks demonstrate powerful task understanding (Taori et al., 2023; Chung et al., 2022). Although emergent abilities have been widely studied, there is seldom comprehensive work that focuses on evaluating them on quantized LLMs. To bridge this gap in research, our work aims to provide a detailed analysis of how emergent abilities exist in quantized LLMs. | 2307.08072#47 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 47 | Table 5 body (English-language portion; columns: Category, Genre, #Document, #Sentence, #Word, Description):
| Category | Genre | #Document | #Sentence | #Word | Description |
| --- | --- | --- | --- | --- | --- |
| Electronic | Novel | 33,156,134 | 422,757,234 | 26,777,401,794 | Web Fiction |
| Modernist | Classical | 3,104,507 | 39,593,119 | 2,507,247,359 | Masterpiece |
| Modernist | Book | 324,912 | 4,162,821 | 78,695,499 | Publication |
| Ancient | Poetry | 2,269 | 21,456 | 148,222 | World's Poetry |
| Others | Lyrics | 3,088,688 | 110,268,328 | 632,820,393 | World's Songs |
| Others | Movie Script | 2,826 | 12,534,815 | 67,433,609 | Movie Script |
| Others | Movie | 155,670 | 56,819,567 | 315,189,001 | Movie Subtitle |
| Others | Dialogue | 9,191 | 4,172,736 | 27,208,957 | Talk, Message |
| Total | | 39,844,197 | 650,330,076 | 30,406,144,834 | |
| 2307.08074#47 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. In
total, we evaluate 20 general-domain, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
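The per-genre counts in the corpus-statistics table of record 2307.08074#47 above can be sanity-checked against the reported totals; a small self-contained script (numbers copied from the table):

```python
# Sanity check: the per-genre (#Document, #Sentence, #Word) counts from the
# statistics table above should sum to the reported Total row.
rows = {
    "Electronic Novel": (33_156_134, 422_757_234, 26_777_401_794),
    "Classical Book":   (3_104_507,  39_593_119,  2_507_247_359),
    "Publication":      (324_912,    4_162_821,   78_695_499),
    "Ancient Poetry":   (2_269,      21_456,      148_222),
    "Lyrics":           (3_088_688,  110_268_328, 632_820_393),
    "Movie Script":     (2_826,      12_534_815,  67_433_609),
    "Movie Dialogue":   (155_670,    56_819_567,  315_189_001),
    "Others":           (9_191,      4_172_736,   27_208_957),
}
totals = tuple(sum(col) for col in zip(*rows.values()))
assert totals == (39_844_197, 650_330_076, 30_406_144_834)
print(totals)
```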
2307.07924 | 48 | Multi-Agent Collaboration Large language models (LLMs) have exhibited remarkable proficiency across a wide range of domains. Recently, several works have explored utilizing the interactions between LLMs to achieve various goals. (1) Behaviour simulation: Park et al. [33] create multiple generative agents in a sandbox environment to simulate believable human behavior. Wang et al. [41] use multiple agents to simulate user behaviours in the recommendation scenario. (2) Data construction: Wei et al. [45] assign agents different roles to collect and evaluate multi-party conversations. Li et al. [24] propose a role-playing framework which leverages agents to generate diverse and detailed instructions for complicated tasks. (3) Performance improvement: Salewski et al. [36] find that asking an agent to take on different roles can improve its performance. Du et al. [12] improve factual correctness and reasoning accuracy by leveraging multi-agent debate. Liang et al. [25] have multiple agents debate each other to solve the degeneration-of-thought problem in self-reflection. Fu et al. [15] find that multiple agents can improve each other in a negotiation game like | 2307.07924#48 | Communicative Agents for Software Development |
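The role-played multi-agent interaction pattern surveyed in this chunk — two LLM instances prompted into complementary roles and exchanging messages — can be sketched as a simple turn-taking loop. The `query_llm` callable below is a hypothetical stand-in for any chat-model call, not an API from the cited systems:

```python
from typing import Callable, Dict, List

def role_play_chat(
    query_llm: Callable[[List[Dict[str, str]]], str],
    instructor_role: str,
    assistant_role: str,
    task: str,
    max_turns: int = 4,
) -> List[str]:
    """Minimal dual-role loop: two role-prompted agents alternate turns."""
    transcript: List[str] = []
    message = f"Task: {task}. Please propose a solution."
    for _ in range(max_turns):
        # The assistant agent proposes a solution for the current request.
        proposal = query_llm([
            {"role": "system", "content": f"You are a {assistant_role}."},
            {"role": "user", "content": message},
        ])
        transcript.append(f"{assistant_role}: {proposal}")
        # The instructor agent reviews the proposal and issues the next request.
        message = query_llm([
            {"role": "system", "content": f"You are a {instructor_role}."},
            {"role": "user",
             "content": f"Review this proposal and give feedback: {proposal}"},
        ])
        transcript.append(f"{instructor_role}: {message}")
    return transcript
```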
2307.08072 | 48 | Post-Training Quantization Post-training quantization (PTQ) has been widely used for reducing memory consumption and computational costs in neural networks. A number of studies have explored the use of PTQ on LLMs, including quantization of model weights (Frantar et al., 2022; Dettmers and Zettlemoyer, 2022) and feature activations (Dettmers et al., 2022; Yao et al., 2023b), due to its ability to decrease training requirements
while minimizing performance impact. However, there is still a lack of comprehensive empirical studies evaluating the emergent abilities of quantized LLMs. The most relevant studies to this work are Yao et al. (2023b) and Dettmers and Zettlemoyer (2022). In particular, Yao et al. (2023b) present a detailed analysis of various strategies in PTQ methods on LLMs, and Dettmers and Zettlemoyer (2022) explore the inference scaling laws of zero-shot performance for k-bit quantization. These two studies mainly focus on the analysis of overall abilities, whereas we take a special perspective to study emergent abilities in quantized LLMs.
# 6 Conclusion | 2307.08072#48 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
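A minimal sketch of the post-training weight-quantization idea discussed in record 2307.08072#48: symmetric round-to-nearest (RTN) quantization of a weight matrix to k bits. This illustrates the general PTQ recipe only, not the specific GPTQ or LLM.int8() algorithms cited above:

```python
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int = 4):
    """Symmetric per-tensor round-to-nearest quantization to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = np.abs(w).max() / qmax        # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_rtn(w, bits=4)
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```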
2307.08074 | 48 | # 5.2 Discourse-Aware Disco-Bench Pretrained Models
The frequencies and types of discourse phenomena vary in different domains (Yang, Liu, and Xue 2015), leading to differences in model behavior and quality across domains. However, most existing pretrained models are trained on datasets without discourse information (e.g. sentence level) or in mixed domains (e.g. Wikipedia and news). Considering that texts in the literature domain contain rich discourse phenomena (De Beaugrande and Dressler 1981), we construct large-scale, in-domain, document-level datasets in Chinese and English. To fill the gap, we follow Wang et al. (2022) and further train the existing pretrained models (coarse-grained pretraining) on the document-level Disco-Bench training data (fine-grained pretraining) to model discourse phenomena. For each Disco-Bench model, we use the existing pretrained models for weight initialization, and we further train the models on the Disco-Bench data with the same loss. We limit the input length to 512 tokens for RoBERTa models and 1024 tokens for BART, mBART, and
| 2307.08074#48 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling |
2307.07924 | 49 | other to solve the degeneration-of-thought problem in self-reflection. Fu et al. [15] find that multiple agents can improve each other in a negotiation game like buyer-seller dealing by role-playing and learning from agent feedback. Liu et al. [26] design a simulated social-interaction sandbox to achieve social alignment for LLMs. Talebirad et al. [37] introduce multiple agents with unique attributes and roles to handle complex tasks in a black-box environment. | 2307.07924#49 | Communicative Agents for Software Development |
2307.08072 | 49 | # 6 Conclusion
In this work, we have conducted an empirical study to examine the impact of post-training quantization on the emergent abilities of LLMs. Our findings reveal that large models (fine-tuned or not) can well retain emergent abilities with 4-bit weight quantization, but experience substantial degradation at 2-bit precision. Moreover, we delve into the fine-grained components and substructures for studying the quantization sensitivity. Our results indicate that LLMs can be enhanced by effectively preserving more crucial components, feature dimensions and substructures for low-bit quantization. Additionally, we have also examined the effect of fine-tuning for improving the performance of quantized models. Experimental results demonstrate that fine-tuning can alleviate the performance degradation from low-bit quantization, showing the great potential to enhance the capacity of quantized LLMs.
# References | 2307.08072#49 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
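The fine-tuning compensation mentioned in the conclusion above can be sketched as follows: attach low-rank adapters (LoRA) to a 4-bit-quantized base model and train only the adapters, in the spirit of the QLoRA work cited in this paper's bibliography. The checkpoint name and adapter hyper-parameters below are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base model with 4-bit weight quantization (requires bitsandbytes).
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",      # placeholder checkpoint
    load_in_4bit=True,
    device_map="auto",
)

# Attach LoRA adapters; only these small matrices are trained, so the
# quantized base weights stay frozen while fine-tuning recovers quality.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```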
2307.08074 | 49 |
Table 6: Summary of pretrained models varying in model architecture, parameter scale, training data, and targeted task (i.e. understanding, translation, and generation). #1~11 are publicly available. #12 denotes a series of pretrained models that are continuously trained on our literature-domain data, initialized from the corresponding parameters in #1~11.
| # | Model | Language | Task | Corpus |
| --- | --- | --- | --- | --- |
| 1 | BERT (base) | zh | – | – |
| 2 | RoBERTa (base) | zh | – | – |
| 3 | RoBERTa (large) | zh | – | – |
| 4 | AnchiBERT (base) | zh | – | – |
| 5 | MengziBERT (base) | zh | – | – |
| 6 | BART (large) | zh | – | – |
| 7 | mBART (CC25) | zh, en, etc. | – | 1.4TB Common Crawl |
| 8 | GPT2 (base) | zh | – | 14GB CLUE Corpus |
| 9 | GPT2 (large) | en | – | 40GB Web Text |
| 10 | T5 (base) | zh | – | 14GB CLUE Corpus |
| 11 | T5 (large) | en | – | 745GB C4 |
| 12 | Disco-Bench (family) | zh, en | U, T, G | 400GB Literature |
GPT models. The pretraining hyper-parameter details of the released Disco-Bench models can be found in Table 15.
# 6. Experiments
# 6.1 Setup | 2307.08074#49 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling |
2307.07924 | 50 | # 7 Conclusion
In this study, we have presented ChatDev, a chat-based end-to-end software development framework that leverages LLMs to facilitate effective communication and collaboration among multiple roles involved in the software development process. By decomposing the development process into sequential atomic subtasks through the use of the chat chain, ChatDev enables granular focus and promotes desired outputs for each subtask. Additionally, the thought instruction mechanism alleviates challenges related to code hallucinations by guiding programmers through specific code modifications during code completion, reviewing, and testing. Our experimental results demonstrate the efficiency and cost-effectiveness of the automated software development process driven by ChatDev. By employing multiple software agents with different roles, we have proposed a new paradigm in generating software systems, alleviating code vulnerabilities, and identifying and resolving potential bugs. The collaborative interactions and mutual examination between roles within each chat have contributed to effective decision-making for each subtask. | 2307.07924#50 | Communicative Agents for Software Development |
2307.08072 | 50 | # References
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. | 2307.08072#50 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
2307.08074 | 50 | GPT models. The pretraining hyper-parameter details of the released Disco-Bench models can be found in Table 15.
# 6. Experiments
# 6.1 Setup
Plain Models. We use the Transformer (Vaswani et al. 2017) with base and big configurations as our plain models. We use the Adam optimizer with β1 = 0.9 and β2 = 0.98, and employ large batching (Ott et al. 2018) for model training. We set the max learning rate to 0.0007 and the warmup steps to 16000. All dropout probabilities are set to 0.3. | 2307.08074#50 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling |
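The stated optimization recipe (Adam with β1 = 0.9, β2 = 0.98, peak learning rate 7e-4, 16k warmup steps) matches the standard Transformer schedule; the sketch below assumes an inverse-square-root decay after warmup, which the paragraph does not state explicitly:

```python
import torch

model = torch.nn.Linear(512, 512)  # stand-in for the Transformer
optimizer = torch.optim.Adam(model.parameters(), lr=7e-4, betas=(0.9, 0.98))

warmup = 16000
def lr_lambda(step: int) -> float:
    step = max(step, 1)
    # Linear warmup to the peak LR, then inverse-square-root decay.
    return min(step / warmup, (warmup / step) ** 0.5)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
for step in range(5):
    optimizer.step()
    scheduler.step()
    print(step, scheduler.get_last_lr())
```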
2307.07924 | 51 | Moving forward, further research can focus on refining the communication protocols and optimizing the interaction dynamics within each chat to enhance the performance and effectiveness of ChatDev. Additionally, exploring the integration of other emerging technologies, such as reinforcement learning and explainable AI, could provide valuable insights into addressing challenges and improving the overall software development process. Our research will persist in exploring enhancements and advancements in ChatDev agents, workflow, and development environments. The overarching objective is to achieve even greater efficiency in software production by improving various characteristics, such as reducing the length of chat chains or optimizing subtask-solving logic and strategies, ultimately leading to more streamlined and effective software production processes. We hope the potential of the proposed natural-language-to-software framework can illuminate fresh possibilities for integrating LLMs into software development and mark the dawn of a new frontier in the fields of natural language processing, software engineering, and collective intelligence.
# Contributions | 2307.07924#51 | Communicative Agents for Software Development |
2307.08072 | 51 | Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. CoRR, abs/2210.11416.
Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. LLM.int8(): 8-bit matrix multiplication for transformers at scale. CoRR, abs/2208.07339.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. QLoRA: Efficient finetuning of quantized LLMs. arXiv preprint arXiv:2305.14314. | 2307.08072#51 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
2307.08074 | 51 | Existing Pretrained Models. We systematically compare SOTA pretraining models on our constructed discourse-aware benchmark, including BERT (Devlin et al. 2019), RoBERTa (Cui et al. 2020), AnchiBERT (Tian et al. 2021), MengziBERT (Zhang et al. 2021), BART (Lewis et al. 2020; Shao et al. 2021), mBART (Liu et al. 2020), GPT2 (Radford et al. 2019; Zhao et al. 2019), and T5 (Raffel et al. 2020; Zhao et al. 2019). Table 6 shows the summary information of the pretrained models. We fine-tuned these public models on the corresponding datasets for downstream tasks. For translation tasks, we use BERT-based pretrained models (e.g. BERT, RoBERTa) to initialize the encoder of NMT models. We choose the hyper-parameters based on the performance on the validation set for each model. We fine-tune each model twice and report the averaged test results. The fine-tuning hyper-parameters are detailed in Table 16. We evaluate the following public pretrained models on the Disco-Bench Benchmark and Test Suite: | 2307.08074#51 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling |
2307.07924 | 52 | # Contributions
The authors' contributions are delineated as follows: During the project's formulation, Chen Qian, Xin Cong, and Cheng Yang orchestrated the design of the model architecture. In the process of refining the model's structure, Dahai Li, Zhiyuan Liu, and Maosong Sun provided invaluable guidance. The main paper was composed by Chen Qian, Xin Cong, Weize Chen, Yusheng Su, and Juyuan Xu, with input from Cheng Yang, Weize Chen, Yusheng Su, Zhiyuan Liu, and Maosong Sun to enhance its clarity. To enhance public accessibility, Wei Liu, Yufan Dang, and Jiahao Li championed the open-source repository through comprehensive testing; Wei Liu restructured and optimized the system. Additionally, Wei Liu and Yufan Dang spearheaded the development of the online demo to ensure wider dissemination.
# Acknowledgements
We thank Yujia Qin, Shengding Hu, Yankai Lin, Bowen Li, Jingwei Zuo, Xuanhe Zhou, Shuo Wang, Qimin Zhan, and Yukun Yan for their active participation in various discussions and providing valuable feedback on this research. We also express our gratitude to the AgentVerse, Camel and Gather projects for providing the initial foundation for this project.
# References | 2307.07924#52 | Communicative Agents for Software Development |
2307.08072 | 52 | Tim Dettmers and Luke Zettlemoyer. 2022. The case for 4-bit precision: k-bit inference scaling laws. CoRR, abs/2212.09720.
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. 2022. GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323.
Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. 2023. Chain-of-thought hub: A continuous effort to measure large language models' reasoning performance. CoRR, abs/2305.17306.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. | 2307.08072#52 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
2307.08074 | 52 | • BERT (base): we use the base model (12-layer encoder, hidden size 768, vocabulary size 21128) published by Devlin et al. (2019), which was pretrained on a Chinese Wikipedia
dump of about 0.4 billion tokens, using the losses of masked language modeling (MLM) and next sentence prediction.
⢠RoBERTa (base): Cui et al. (2020) a model with the same architecture of BERT (base) except it uses whole word masking and is trained on additional 5 billion tokens with only MLM pretrained task. This model uses BERT (base) as the initial weight.10
⢠RoBERTa (large): Cui et al. (2020) the large model size of RoBERTa model (24 layer encoder, hidden size 1024, vocabulary size 21128) This model has the same training procedure of RoBERTa-wwm-ext (base). This model is trained from scratch.11
⢠AnchiBERT: Tian et al. (2021) a model continues pretraining based on the BERT (base) model with the 39.5M anchient Chinese tokens. It uses the same tokenizer and other techniques as BERT-base.12 | 2307.08074#52 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
2307.07924 | 53 | # References
[1] Mohammad Alahmadi, Abdulkarim Khormi, Biswas Parajuli, Jonathan Hassel, Sonia Haiduc, and Piyush Kumar. Code localization in programming screencasts. Empir. Softw. Eng., 25(2):1536–1572, 2020.
[2] Razvan Azamfirei, Sapna R Kudchadkar, and James Fackler. Large language models and the perils of their hallucinations. Critical Care, 27(1):1–2, 2023.
[3] Youssef Bassil. A simulation model for the waterfall software development life cycle. arXiv preprint arXiv:1205.6904, 2012.
[4] Jorge Biolchini, Paula Gomes Mian, Ana Candida Cruz Natali, and Guilherme Horta Travassos. Systematic review in software engineering. System engineering and computer science department COPPE/UFRJ, Technical Report ES, 679(05):45, 2005. | 2307.07924#53 | Communicative Agents for Software Development |
2307.08072 | 53 | Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out: The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO@ACL 2022, Dublin, Ireland and Online, May 27, 2022, pages 100–114. Association for Computational Linguistics.
Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra. 2023. LLM-QAT: Data-free quantization aware training for large language models. CoRR, abs/2305.17888.
OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774. | 2307.08072#53 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
2307.08074 | 53 | • MengziBERT: Zhang et al. (2021), a model initialized from RoBERTa (base) (Liu et al. 2019b) and trained with specially designed objectives.
⢠BART (large): Shao et al. (2021) train a large model (12 layer encoder and 12 layer decoder, hidden size 1024, vocabulary size 21128) with denoising auto-encoding (DAE) objective. This model is trained on the open source large-scale raw text, Chinese Wikipedia, and a part of WuDaoCorpus. The training data contains 200GB cleaned text ranging from different domains.14
⢠mBART (CC25): Pires, Schlinger, and Garrette (2019) use a large model (12 layer encoder and 12 layer decoder, hidden size 1024, vocabulary size 250,000), trained with 25 language web corpus. This model is trained from scratch.15
⢠GPT2: Zhao et al. (2019) train a 12-layer decoder-only Transformers and its vocabulary is size 21,128. This model is trained with the CLUECorpusSmall corpus.16 | 2307.08074#53 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
2307.08074 | 54 | • GPT-3.5 & GPT-4: ChatGPT is an intelligent chat assistant developed by OpenAI upon InstructGPT (Ouyang et al. 2022), which is trained to follow an instruction in a prompt and provide a detailed response. All corresponding results were obtained from the ChatGPT API in June 2023.
# 6.2 Main Results
Table 7 lists the results on the proposed benchmarks, from which several observations can be made. Concerning the existing pretrained models, pretraining improves performance over plain models on all tasks, which is consistent with previous studies. These results validate that the proposed benchmarks are reasonable. We evaluated the encoder-only architecture (e.g., models similar to BERT) on tasks involving comprehension and translation. We also assessed the decoder-only architecture (e.g., GPT-2) on tasks requiring generation, and the encoder-decoder architecture (e.g., BART) on all tasks. Some architectures were not tested on certain tasks because our preliminary experiments showed subpar performance on those tasks. | 2307.08074#54 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling |
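Outputs from the GPT-3.5/GPT-4 baselines above were collected through the ChatGPT API; a minimal sketch using the mid-2023 openai-python (< 1.0) interface, with a placeholder prompt:

```python
import openai

openai.api_key = "sk-..."  # set your API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or "gpt-4"
    messages=[{"role": "user",
               "content": "Translate the following literary passage ..."}],
    temperature=0,  # deterministic decoding for benchmark runs
)
print(response["choices"][0]["message"]["content"])
```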
2307.07924 | 55 | [6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. | 2307.07924#55 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
9 https://huggingface.co/bert-base-chinese
10 https://huggingface.co/hfl/chinese-roberta-wwm-ext/tree/main
11 https://huggingface.co/hfl/chinese-roberta-wwm-ext
12 https://github.com/ttzHome/AnchiBERT
13 https://huggingface.co/Langboat/mengzi-bert-base
14 https://huggingface.co/fnlp/bart-base-chinese
15 https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.v2.tar.gz
16 https://github.com/CLUEbenchmark/CLUECorpus2020
17 https://platform.openai.com
Table 7: Performance of baseline models on the Disco-Bench benchmark. A similar table is presented on the online platform. Bold denotes the best result in each column. SI and ZPR are measured by F1, and MRC by accuracy. We report BLEU for NT, CCT, PT and TE, and BERTScore for the others.
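An illustrative sketch of the automatic metrics named in the caption, using common open-source implementations (sacrebleu and bert-score). This is an assumption about tooling, not necessarily the exact scorers behind the leaderboard numbers.

```python
import sacrebleu
from bert_score import score

hyps = ["the cat sat on the mat ."]
refs = ["the cat sat on the mat ."]

# Corpus-level BLEU; references are passed as a list of reference streams.
bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"BLEU = {bleu.score:.1f}")

# BERTScore precision/recall/F1 tensors, one entry per sentence pair.
P, R, F1 = score(hyps, refs, lang="en")
print(f"BERTScore F1 = {F1.mean().item():.3f}")
```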
[7] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[8] Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023.
[9] Roi Cohen, May Hamri, Mor Geva, and Amir Globerson. Lm vs lm: Detecting factual errors via cross examination. arXiv preprint arXiv:2305.13281, 2023.
[10] Juan de Vicente Mohino, Javier Bermejo Higuera, Juan Ramón Bermejo Higuera, and Juan Antonio Sicilia Montalvo. The application of a new secure software development life cycle (s-sdlc) with agile methodologies. Electronics, 8(11):1218, 2019.
[11] Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. Self-collaboration code generation via chatgpt, 2023.
Table 7 (part 1):

| Model | SI↑ | ZPR↑ | MRC↑ | NT↑ | CCT↑ | PT↑ | TE↑ | TI↑ | TC↑ |
|---|---|---|---|---|---|---|---|---|---|
| **Plain Models** | | | | | | | | | |
| Transformer (base) | 9.1 | 10.8 | 38.2 | 22.1 | 32.5 | 4.3 | 24.9 | 58.1 | 58.2 |
| Transformer (big) | 4.4 | 11.1 | 38.7 | 22.5 | 33.5 | 4.3 | 29.6 | 58.5 | 59.9 |
| **Existing Pretrained Models** | | | | | | | | | |
| BERT (base) | 85.1 | 24.5 | 51.6 | 22.8 | 42.5 | 6.1 | - | - | - |
| AnchiBERT (base) | 81.3 | 23.2 | 46.3 | 22.1 | 42.6 | 6.1 | - | - | - |
| MengziBERT (base) | 86.9 | 31.5 | 51.0 | 21.2 | 42.3 | 5.5 | - | - | - |
| RoBERTa (base) | 86.3 | 28.5 | 51.0 | 21.9 | 42.3 | 5.8 | - | - | - |
| RoBERTa (large) | 88.7 | 33.0 | 55.9 | 20.8 | 44.2 | 5.7 | - | - | - |
| GPT-2 | - | - | - | - | - | - | 30.0 | 59.4 | 57.6 |
| BART (large) | 86.5 | 32.8 | 50.2 | 21.7 | 43.3 | 7.3 | 33.8 | 62.2 | 60.3 |
| mBART (CC25) | - | - | - | 24.0 | - | 12.6 | - | - | - |
[12] Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. CoRR, abs/2305.14325, 2023.
[13] Saad Ezzini, Sallam Abualhaija, Chetan Arora, and Mehrdad Sabetzadeh. Automated handling of anaphoric ambiguity in requirements: A multi-solution study. In 44th IEEE/ACM 44th International Conference on Software Engineering, ICSE 2022, Pittsburgh, PA, USA, May 25-27, 2022, pages 187–199. ACM, 2022.
[14] Peter Freeman, Donald J. Bagert, Hossein Saiedian, Mary Shaw, Robert Dupuis, and J. Barrie Thompson. Software engineering body of knowledge (SWEBOK). In Proceedings of the 23rd International Conference on Software Engineering, ICSE 2001, 12-19 May 2001, Toronto, Ontario, Canada, pages 693–696. IEEE Computer Society, 2001.
Table 7 (continued):

| Model | SI↑ | ZPR↑ | MRC↑ | NT↑ | CCT↑ | PT↑ | TE↑ | TI↑ | TC↑ |
|---|---|---|---|---|---|---|---|---|---|
| **Disco-Bench Pretrained Models** | | | | | | | | | |
| RoBERTa (base) | 87.7 | 31.2 | 50.0 | 22.8 | 46.6 | 6.6 | - | - | - |
| RoBERTa (large) | 89.6 | 34.3 | 56.7 | 21.6 | 44.0 | 7.2 | - | - | - |
| GPT-2 | - | - | - | - | - | - | 32.5 | 59.7 | 60.2 |
| BART (large) | 86.6 | 33.5 | 50.3 | 23.2 | 43.8 | 7.1 | 36.2 | 62.4 | 60.7 |
| mBART (CC25) | - | - | - | 24.3 | - | 13.9 | - | - | - |
| **Large Language Models** | | | | | | | | | |
| GPT-3.5 | 78.7 | 13.5 | 48.6 | 22.5 | 22.2 | 8.1 | 24.2 | 59.7 | 59.0 |
| GPT-4 | 84.9 | 9.7 | 63.2 | 24.0 | 27.6 | 9.1 | 27.1 | 60.4 | 59.6 |
[15] Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. Improving language model negotiation with self-play and in-context learning from AI feedback. CoRR, abs/2305.10142, 2023.
[16] Sa Gao, Chunyang Chen, Zhenchang Xing, Yukun Ma, Wen Song, and Shang-Wei Lin. A neural model for method name generation from functional description. In 26th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2019, Hangzhou, China, February 24-27, 2019, pages 411–421. IEEE, 2019.
[17] Robert L. Glass, Iris Vessey, and Venkataraman Ramesh. Research in software engineering: an analysis of the literature. Information and Software Technology, 44(8):491–506, 2002.
[18] Fred J. Heemstra. Software cost estimation. Information and Software Technology, 34(10):627–639, 1992.
[19] Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. Deep code comment generation. In Proceedings of the 26th Conference on Program Comprehension, ICPC 2018, Gothenburg, Sweden, May 27-28, 2018, pages 200–210. ACM, 2018.
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, and Xipeng Qiu. 2022. Black-box tuning for language-model-as-a-service. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 20841–20855. PMLR.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971.
Among the BERT variants with the base setting, AnchiBERT, trained on small-scale classical Chinese data, outperforms the other models on CCT and PT, demonstrating the necessity of bridging the domain gap. Enlarging the model capacity usually improves performance (e.g., RoBERTa from the base to the large setting). The GPT-2 model exhibits superior performance on the TE and TI tasks compared to the plain Transformer model, but its performance is inferior on the TC task. The BART model excels in all generation tasks, underscoring the efficacy of the encoder-decoder architecture for such tasks. Pretraining with multilingual data, as in the mBART model, can yield a more substantial improvement in translation quality than BART, which is particularly evident in the NT and PT tasks. Clearly, fine-grained pretraining on Disco-Bench data outperforms the coarse-grained counterparts, demonstrating the effectiveness and necessity of modeling discourse information. The RoBERTa models work better on language understanding tasks, and the BART variants produce superior performance on the language translation and generation tasks.
[20] Xing Hu, Xin Xia, David Lo, Zhiyuan Wan, Qiuyuan Chen, and Thomas Zimmermann. Practitioners' expectations on automated code comment generation. In 44th IEEE/ACM 44th International Conference on Software Engineering, ICSE 2022, Pittsburgh, PA, USA, May 25-27, 2022, pages 1693–1705. ACM, 2022.
[21] Magne Jorgensen and Martin Shepperd. A systematic review of software development cost estimation studies. IEEE Transactions on Software Engineering, 33(1):33–53, 2007.
[22] Rafiq Ahmad Khan, Siffat Ullah Khan, Habib Ullah Khan, and Muhammad Ilyas. Systematic literature review on security risks and its practices in secure software development. IEEE Access, 10:5456–5481, 2022.
[23] Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760, 2023.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.
Zhewei Yao, Cheng Li, Xiaoxia Wu, Stephen Youn, and Yuxiong He. 2023a. A comprehensive study on post-training quantization for large language models. arXiv preprint arXiv:2303.08302.
Zhewei Yao, Xiaoxia Wu, Cheng Li, Stephen Youn, and Yuxiong He. 2023b. Zeroquant-v2: Exploring post-training quantization in llms from comprehensive study to low rank compensation.
We also conducted tests on two LLMs, GPT-3.5 and GPT-4. Although ChatGPT has shown substantial proficiency in long-text NLP tasks (Wang et al. 2023b), it does not quite measure up to the performance of Disco-Bench's pretrained models across the majority
[24] Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: communicative agents for "mind" exploration of large scale language model society. CoRR, abs/2303.17760, 2023.
[25] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language models through multi-agent debate. CoRR, abs/2305.19118, 2023.
[26] Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. CoRR, abs/2305.16960, 2023.
[27] Cuauhtémoc López Martín and Alain Abran. Neural networks for predicting the duration of new software projects. J. Syst. Softw., 101:127–135, 2015.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models. CoRR, abs/2303.18223.
[Figure: accuracy/perplexity versus memory footprint (GiB) under different quantization settings; panels (a) 7B-MMLU (5-shot), (b) 7B-GSM8K (CoT), (c) 7B-WikiText, (d) 13B-MMLU (5-shot), (e) 13B-GSM8K (CoT), (f) 13B-WikiText.]
Table 8: More results on understanding tasks using additional evaluation metrics, including Exact Match, Precision, and Recall. This is complementary to Table 7.
| Model | SI Exact Match↑ | ZPR Precision↑ | ZPR Recall↑ |
|---|---|---|---|
| **Plain Models** | | | |
| Transformer (base) | 0.3 | 10.2 | 11.5 |
| Transformer (big) | 0.1 | 10.5 | 11.9 |
| **Existing Pretrained Models** | | | |
| BERT (base) | 81.9 | 26.1 | 31.0 |
| AnchiBERT | 76.9 | 22.1 | 24.6 |
| MengziBERT | 84.0 | 36.6 | 29.6 |
| RoBERTa (base) | 83.4 | 29.0 | 29.9 |
| RoBERTa (large) | 85.9 | 39.3 | 28.7 |
| BART (large) | 83.7 | 38.3 | 30.2 |
| **Disco-Bench Pretrained Models** | | | |
| RoBERTa (base) | 85.2 | 32.0 | 30.6 |
| RoBERTa (large) | 87.2 | 38.7 | 30.8 |
| BART (large) | 84.6 | 39.0 | 30.5 |
[28] Nadia Nahar, Shurui Zhou, Grace A. Lewis, and Christian Kästner. Collaboration challenges in building ml-enabled systems: Communication, documentation, engineering, and process. In 44th IEEE/ACM 44th International Conference on Software Engineering, ICSE 2022, Pittsburgh, PA, USA, May 25-27, 2022, pages 413–425. ACM, 2022.
[29] Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[30] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
Figure 4: Impacts of different model components or substructures on MMLU (5-shot), GSM8K and WikiText. The memory footprint is counted in GiB (in green dotted lines).
# A Appendix
# A.1 Impacts of Model Components
We provide more details about the impacts of model components or substructures on MMLU (5-shot), GSM8K and WikiText in Figure 4.
# A.2 Case Study
Here, we present case studies of the performance of quantized LLaMA models on the MMLU, GSM8K and AutoEval datasets. The results involve model scales of 7B (Table 6), 13B (Table 7) and 30B (Table 8).
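A minimal sketch, assuming Transformers with bitsandbytes, of loading a LLaMA-style checkpoint with 4-bit weight quantization as studied above; this is an assumption about tooling, not the paper's exact setup, and "huggyllama/llama-7b" is only an illustrative public mirror of a 7B checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,        # 4-bit weights via bitsandbytes
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Question: What is 12 * 7? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```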
of Disco-Bench tasks. Specifically, GPT-4 achieves the highest MRC accuracy among all models and delivers competitive performance on the TI and TC tasks. However, both LLMs lag behind in other task categories. Furthermore, GPT-4 surpasses the performance of GPT-3.5, affirming recent research findings about LLMs. These results underline the challenge and the necessity of our proposed Disco-Bench benchmark. It serves as a robust tool to evaluate the abilities of different models, particularly in terms of discourse awareness, providing valuable insights for future model development.
# 6.3 Results on Additional Evaluation Metrics
A single automatic evaluation metric might not provide a comprehensive depiction of a model's performance. We report the results on several additional evaluation metrics.
[31] Mohd. Owais and R. Ramakishore. Effort, duration and cost estimation in agile software development. In 2016 Ninth International Conference on Contemporary Computing (IC3), pages 1–5, 2016.
[32] Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023.
[33] Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. CoRR, abs/2304.03442, 2023.
[34] Florian Pudlitz, Florian Brokhausen, and Andreas Vogelsang. Extraction of system states from natural language requirements. In 27th IEEE International Requirements Engineering Conference, RE 2019, Jeju Island, Korea (South), September 23-27, 2019, pages 211–222. IEEE, 2019.
[35] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
7B case study (task / bits / model output):

MMLU — Question: Suppose now that a researcher wishes to use information criteria to determine the optimal lag length for a VAR. 500 observations are available for the bi-variate VAR, and the values of the determinant of the variance-covariance matrix of residuals are 0.0336, 0.0169, 0.0084, and 0.0062 for 1, 2, 3, and 4 lags respectively. What is the optimal model order according to Akaike's information criterion?
- 16-bit: Answer: D
- 4-bit: Answer: D
- 2-bit: Answer: 4 1 and, a

GSM8K — Question: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. How many letters are in Indras and her sister's names?
- 16-bit: Answer: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. So her sister's name has 6 + 4 = 10 letters. Indras' name has 6 letters. So Indras' name has 6 - 10 = -4 letters. The answer is -4.
- 4-bit: Answer: Indras has 6 letters in her
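An illustrative sketch of how such case-study prompts are typically assembled (few-shot multiple choice for MMLU and a chain-of-thought cue for GSM8K). The exact templates used in the paper are not shown here; these helper functions follow common practice and are assumptions.

```python
def mmlu_prompt(few_shot, question, choices):
    """Build a few-shot multiple-choice prompt ending with 'Answer:'."""
    parts = []
    for q, cs, gold in few_shot:  # few_shot: list of (question, choices, gold letter)
        opts = "\n".join(f"{letter}. {c}" for letter, c in zip("ABCD", cs))
        parts.append(f"Question: {q}\n{opts}\nAnswer: {gold}")
    opts = "\n".join(f"{letter}. {c}" for letter, c in zip("ABCD", choices))
    parts.append(f"Question: {question}\n{opts}\nAnswer:")
    return "\n\n".join(parts)

def gsm8k_cot_prompt(question):
    """Chain-of-thought style prompt: the model is nudged to reason step by step."""
    return f"Question: {question}\nAnswer: Let's think step by step."
```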
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
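The MMLU case above can be checked by direct computation. Below is a minimal sketch, assuming the standard multivariate form AIC(p) = ln|Sigma(p)| + 2k/T with k = p * g^2 estimated coefficients for a bi-variate VAR (g = 2); the letter-to-lag mapping of the answer options is not recoverable from the chunk, so only the optimal lag length is computed.

```python
import math

# AIC(p) = ln|Sigma_hat(p)| + 2k/T, with k = p * g**2 coefficients
# for a VAR(p) in g variables (a common textbook form, assumed here).
T, g = 500, 2
det_sigma = {1: 0.0336, 2: 0.0169, 3: 0.0084, 4: 0.0062}

aic = {p: math.log(d) + 2 * p * g**2 / T for p, d in det_sigma.items()}
for p in sorted(aic):
    print(f"lags={p}: AIC={aic[p]:.3f}")
print("optimal lag length:", min(aic, key=aic.get))  # -> 4
```

Under this form, AIC decreases monotonically through lag 4, so the criterion selects the largest candidate order.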
2307.08074 | 62 | # 6.3 Results on Additional Evaluation Metrics
A single automatic evaluation metric might not provide a comprehensive depiction of a model's performance. We report the results on several additional evaluation metrics.
Understanding Tasks. Table 8 presents additional evaluation metrics for understanding tasks, including Exact Match (whether the system's response exactly matches the correct answer) for SI, and both Precision (how many of the predicted positive responses were actually positive) and Recall (how many of the actual positive responses were correctly identified by the system) for ZPR. The performance of the Disco-Bench pretrained RoBERTa (large) model according to these additional metrics is consistently superior or comparable to the other models. This corroborates the conclusions drawn from our main evaluation metrics. Notably, the existing pretrained RoBERTa (large) model shows the highest Precision at 39.3 on the ZPR task.
Translation Tasks. Table 9 provides supplementary evaluation metrics for translation tasks, comprising TER (measuring the number of edits required to change a system's output into one of the references), METEOR (considering precision and recall, synonymy, stemming, and phrase-level matches to create an F-score-like composite of these factors),
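A minimal sketch of the three additional understanding metrics just described; the helper names are illustrative, not taken from the Disco-Bench codebase.

```python
# Exact Match for SI; Precision/Recall for ZPR, treating predictions and
# gold annotations as sets of positive items (e.g. recovered pronouns).
def exact_match(pred: str, gold: str) -> bool:
    return pred.strip() == gold.strip()

def precision_recall(pred_set: set, gold_set: set) -> tuple[float, float]:
    tp = len(pred_set & gold_set)                    # true positives
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    return precision, recall

print(exact_match("他", "他"))                        # True
print(precision_recall({"他", "它"}, {"他", "她"}))   # (0.5, 0.5)
```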
Table 9: More results on translation tasks using additional evaluation metrics, including TER, METEOR and COMET. This is complementary to Table 7. | 2307.08074#62 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 63 | [36] Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, and Zeynep Akata. In-context impersonation reveals large language models' strengths and biases. CoRR, abs/2305.14930, 2023.
[37] Yashar Talebirad and Amirhossein Nadiri. Multi-agent collaboration: Harnessing the power of intelligent LLM agents. CoRR, abs/2306.03314, 2023.
[38] Hannes Thaller, Lukas Linsbauer, and Alexander Egyed. Feature maps: A comprehensible software representation for design pattern detection. In 26th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2019, Hangzhou, China, February 24-27, 2019, pages 207–217. IEEE, 2019.
[39] Chengcheng Wan, Shicheng Liu, Sophie Xie, Yifan Liu, Henry Hoffmann, Michael Maire, and Shan Lu. Automated testing of software that uses machine learning APIs. In 44th IEEE/ACM International Conference on Software Engineering, ICSE 2022, Pittsburgh, PA, USA, May 25-27, 2022, pages 212–224. ACM, 2022. | 2307.07924#63 | Communicative Agents for Software Development
2307.08072 | 63 | Indras' name has 6 letters. So Indras' name has 6 - 10 = -4 letters. The answer is -4.
4-bit Answer: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. So her sister's name has 6 + 4 = 10 letters. Indras has 6 letters. Her sister's name has 10 letters. The answer is 10.
2-bit Answer: M.H. or 4 weeks.. Answer:B. 82. 30.. 44. 4........ which 4b. J.o4.b.gone and the best of the best. ungers are the best Cash.. . 6 days are.. Cash.B.Age.B.G.o.just.J.1Age.

AutoEval — Question: How can I increase my productivity while working from home?
16-bit Answer: Here are some tips to help you increase your productivity while working from home: 1. Set work hours and stick to them. 2. Have a designated work area. 3. Avoid distractions. 4. Set a schedule and stick to it...
4-bit Answer: It's important to know what you want to get done and to set priorities. Knowing what to do and where to start is key to staying productive. Here are some tips to help you stay productive: 1. Work | 2307.08072#63 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study
2307.08074 | 63 | Model                 NT: TER↓ / MET.↑ / COM.↑    CCT: TER↓ / COM.↑    PT: TER↓ / MET.↑ / COM.↑

Plain Models
Transformer (base)     74.3 / 20.6 / 0.74           98.5 / 0.65           114.1 / 7.4 / 0.48
Transformer (big)      73.3 / 20.9 / 0.75           98.4 / 0.65           112.9 / 7.9 / 0.49

Existing Pretrained Models
BERT (base)            73.7 / 21.1 / 0.74           95.8 / 0.65           105.9 / 10.4 / 0.52
AnchiBERT              74.1 / 20.7 / 0.74           95.9 / 0.67           100.1 / 10.4 / 0.53
MengziBERT             76.5 / 20.5 / 0.74           96.0 / 0.67           105.5 / 8.9 / 0.51
RoBERTa (base)         74.1 / 20.5 / 0.75           96.2 / 0.65           104.7 / 9.1 / 0.51
RoBERTa (large)        75.1 / 19.6 / 0.72           94.8 / 0.68           99.6 / 9.4 / 0.50
BART (large)           75.6 / 21.1 / 0.74           96.5 / 0.65           100.8 / 11.1 / 0.54
mBART (CC25)           71.9 / 22.2 / 0.77           - / -                 88.2 / 14.7 / | 2307.08074#63 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
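A hedged sketch of computing the surface-overlap metrics from this table with the sacrebleu package; METEOR (e.g. via nltk) and COMET (via the separate unbabel-comet package plus a downloaded checkpoint) follow the same hypotheses-vs-references pattern and are omitted here.

```python
# Corpus-level TER and BLEU with sacrebleu (illustrative toy inputs).
from sacrebleu.metrics import BLEU, TER

hypotheses = ["the cat sat on the mat"]
references = [["the cat is on the mat"]]  # one reference stream

print(TER().corpus_score(hypotheses, references).score)   # lower is better
print(BLEU().corpus_score(hypotheses, references).score)  # higher is better
```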
2307.07924 | 64 | [40] Yao Wan, Zhou Zhao, Min Yang, Guandong Xu, Haochao Ying, Jian Wu, and Philip S. Yu. Improving automatic source code summarization via deep reinforcement learning. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, ASE 2018, Montpellier, France, September 3-7, 2018, pages 397–407. ACM, 2018.
[41] Lei Wang, Jingsen Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, and Ji-Rong Wen. RecAgent: A novel simulation paradigm for recommender systems. CoRR, abs/2306.02552, 2023.
[42] Song Wang, Taiyue Liu, and Lin Tan. Automatically learning semantic features for defect prediction. In Proceedings of the 38th International Conference on Software Engineering, ICSE 2016, Austin, TX, USA, May 14-22, 2016, pages 297–308. ACM, 2016. | 2307.07924#64 | Communicative Agents for Software Development
2307.08072 | 64 | you want to get done and to set priorities. Knowing what to do and where to start is key to staying productive. Here are some tips to help you stay productive: 1. Work in a well-lit area that's free from distractions. 2. Set a reasonable time limit for yourself. 3. Set a specific goal for each day...
2-bit Answer: The best The Behavre when L9Life of. Their, Be in the World Op LoDur and Liflo the attitudeFHHow the of them onlineDo ld vs Life in the future play that' but the process of, and is a habit of, the pro will be the toys, but-fold to-magnet-H of who and ISP a book.
Scores (16-bit / 4-bit / 2-bit) — MMLU: 35.2 / 34.2 / 3.8; GSM8K: 13.11 / 12.21 / 0; AutoEval: 1121 / 1058 / 608 | 2307.08072#64 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study
2307.07924 | 65 | [43] Song Wang, Nishtha Shrestha, Abarna Kucheri Subburaman, Junjie Wang, Moshi Wei, and Nachiappan Nagappan. Automatic unit test generation for machine learning libraries: How far are we? In 43rd IEEE/ACM International Conference on Software Engineering, ICSE 2021, Madrid, Spain, 22-30 May 2021, pages 1548–1560. IEEE, 2021.
[44] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[45] Jimmy Wei, Kurt Shuster, Arthur Szlam, Jason Weston, Jack Urbanek, and Mojtaba Komeili. Multi-party chat: Conversational agents in group settings with humans and models. CoRR, abs/2304.13835, 2023. | 2307.07924#65 | Communicative Agents for Software Development
and COMET (a learned metric trained on human translation ranking data, which captures more nuanced, semantic comparisons and is less reliant on surface-level text matches). Notably, there are no resources available for Classical Chinese in the METEOR evaluation. When observing the performance across the NT and PT tasks, the Disco-Bench pretrained mBART model outshines all others across all three metrics, reinforcing its top-ranking performance as indicated by the BLEU scores. However, the metrics TER and COMET display inconsistent performances when applied to the CCT task, thereby illustrating the inherent challenges in evaluating such tasks.
Generation Tasks. Table 10 introduces additional evaluation metrics for generation tasks, comprising PPL (perplexity, a measurement of how well a probability model predicts a sample; see footnote 18), BLEU (evaluating the quality of machine-generated text against references), and Dist-n (the number of unique n-grams divided by the total number of n-grams in the generated text). As seen, these metrics exhibit varying performances, highlighting the complexities and challenges associated with the automatic evaluation of generation tasks. Dist-2 and Dist-4 exhibit consistent performance in line with the primary metric, BERTScore. Conversely, the performances of the PPL and BLEU metrics are notably unstable. | 2307.08074#65 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
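Dist-n, exactly as defined above, is straightforward to compute; a minimal sketch (the whitespace tokenization is an assumption, since the chunk does not specify one).

```python
# Dist-n = unique n-grams / total n-grams in the generated text.
def dist_n(tokens: list, n: int) -> float:
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

tokens = "the old man and the old sea".split()  # assumed tokenizer
print(dist_n(tokens, 2))  # Dist-2 = 5/6 (one repeated bigram)
print(dist_n(tokens, 4))  # Dist-4 = 1.0 (all 4-grams unique)
```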
2307.07924 | 66 | [46] Jonas Winkler, Jannis Grönberg, and Andreas Vogelsang. Predicting how to test requirements: An automated approach. In Software Engineering 2020, Fachtagung des GI-Fachbereichs Softwaretechnik, 24.-28. Februar 2020, Innsbruck, Austria, volume P-300 of LNI, pages 141–142. Gesellschaft für Informatik e.V., 2020.
[47] Tianming Zhao, Chunyang Chen, Yuanning Liu, and Xiaodong Zhu. GUIGAN: learning to generate GUI designs using generative adversarial networks. In 43rd IEEE/ACM International Conference on Software Engineering, ICSE 2021, Madrid, Spain, 22-30 May 2021, pages 748–760. IEEE, 2021.

# A Appendix

Role Specialization | 2307.07924#66 | Communicative Agents for Software Development
2307.08072 | 66 | Case studies for the 13B model under 16-, 4-, and 2-bit quantization.

MMLU — Question: Suppose now that a researcher wishes to use information criteria to determine the optimal lag length for a VAR. 500 observations are available for the bi-variate VAR, and the values of the determinant of the variance-covariance matrix of residuals are 0.0336, 0.0169, 0.0084, and 0.0062 for 1, 2, 3, and 4 lags respectively. What is the optimal model order according to Akaike's information criterion?
16-bit Answer: A
4-bit Answer: A
2-bit Answer: C. 1, 2,

GSM8K — Question: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. How many letters are in Indras and her sister's names?
16-bit Answer: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. So her sister's name has 6 + 4 = 10 letters. The answer is 10.
4-bit Answer: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. | 2307.08072#66 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study
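For readers who want to reproduce such 16- vs 4-bit comparisons, one convenient route is 4-bit NF4 loading through bitsandbytes in transformers. This is an illustrative sketch, not necessarily the quantization method evaluated in the paper, and the checkpoint name is an assumption.

```python
# Load a LLaMA-style checkpoint in 4-bit and rerun the GSM8K case above.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

name = "huggyllama/llama-13b"  # assumed checkpoint name
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True,
                                           bnb_4bit_quant_type="nf4"),
    device_map="auto",
)

prompt = ("Indras has 6 letters in her name. Her sister's name has 4 more "
          "letters than half of the letters in Indras' name. How many letters "
          "are in Indras and her sister's names? Answer:")
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```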
2307.08074 | 66 | Footnote 18: We use a GPT-2 language model to compute PPL. For the TI and TE tasks, we use "IDEA-CCNL/Wenzhong-GPT2-110M".
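A minimal sketch of the footnote's PPL computation with Hugging Face transformers, using the checkpoint named in the footnote; exponentiating the mean token negative log-likelihood as sentence perplexity is a common convention assumed here, not a detail taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "IDEA-CCNL/Wenzhong-GPT2-110M"  # checkpoint named in the footnote
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token negative log-likelihood
    return torch.exp(loss).item()

print(perplexity("今天天气很好。"))
```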
Table 10: More results on generation tasks using additional evaluation metrics, including BLEU, PPL, Dist-2 and Dist-4. This is complementary to Table 7.
Model            TE: PPL↓ / BLEU↑    TI: PPL↓ / Dist-2↑ / Dist-4↑ / BLEU↑    TC: PPL↓ / Dist-2↑ / Dist-4↑

Existing Pretrained Models
BART (large)     63.1 / 3.7           8.4 / 0.20 / 0.63 / 2.7                 3.8 / 0.07 / 0.42
GPT-2            70.1 / 1.6           11.2 / 0.18 / 0.54 / 2.1                2.7 / 0.03 / 0.17

Disco-Bench Pretrained Models
BART (large)     49.2 / 3.7           8.8 / 0.19 / 0.65 / 2.9                 3.3 / 0.05 / 0.29
GPT-2            67.5 / 2.2           11.5 / 0.27 / 0.84 / 4.7                3.9 / 0.08 / 0.51 | 2307.08074#66 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
2307.07924 | 67 | CEO: I am the CEO of ChatDev. My main responsibilities include being an active decision-maker on users' demands and other key policy issues, leader, manager, and executor. My decision-making role involves high-level decisions about policy and strategy; and my communicator role can involve speaking to the organization's management and employees.
CPO: I am the CPO of ChatDev. I am responsible for all product-related matters in ChatDev. This usually includes product design, product strategy, product vision, product innovation, project management and product marketing.
CTO: I am the CTO of ChatDev. I am very similar to information technology. I will make high-level decisions for the overarching technology infrastructure that closely align with the organization's goals, while I work alongside the organization's information technology staff members to perform everyday operations.
Programmer: I am a professional programmer of ChatDev. I can write/create computer software or applications by providing a specific programming language to the computer. I have extensive computing and coding experience in many varieties of programming languages and platforms, such as Python, Java, C, C++, HTML, CSS, JavaScript, XML, SQL, PHP, etc.
Code Reviewer: I am a code reviewer of ChatDev. I can help programmers to assess source code for software troubleshooting, fix bugs to increase code quality and robustness, and offer proposals to improve the source code. | 2307.07924#67 | Communicative Agents for Software Development
2307.08072 | 67 | The answer is 10.
4-bit Answer: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. So her sister's name has 6 + 4 = 10 letters. The answer is 10.
2-bit Answer: The answer is 15. Question: 15 -. The answer is 15. The answer is 15. The answer is 15. The answer is 15.

AutoEval — Question: How can I increase my productivity while working from home?
16-bit Answer: Here are some good tips to increase your productivity while working from home: 1. Get dressed for work: Taking the time to get dressed and shower can set a positive tone for your workday. 2. Have a dedicated and organized workspace: Having a space that is only dedicated to work can help you focus on your tasks...
4-bit Answer: Here are four ways you can increase your productivity while working from home: 1. Set a productivity schedule: Set a schedule for your day that includes work and breaks, and stick to it. 2. Get out of the house: Go to a coffee shop or a library to get out of the house and into a different environment...
2-bit Answer: The energy of the energy of the Renewable energy of the energy of the Renewable energy of the Renewable energy of the Renewable energy of the | 2307.08072#67 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study
2307.07924 | 68 | Code Tester: I am a code tester of ChatDev. I can use the software as intended to analyze its functional properties, design manual and automated test procedures to evaluate each software product, build and implement software evaluation test programs, and run test programs to ensure that testing protocols evaluate the software correctly.
Artistic Designer: I am an artistic designer of ChatDev. I direct ChatDev's creative software and develop the artistic design strategy that defines the company's brand. I create the unique image or music of our produced software and deliver this distinctive design to consumers to create a clear brand image, which is a fundamental and essential work throughout the company.

Designing: Modality and Language

Participants: CEO (Chief Executive Officer), CTO (Chief Technology Officer), CPO (Chief Product Officer)

: To satisfy the new user's demand, and since the product should be realizable via the Python programming language, you should keep discussing with me to decide which product modality we want the product to be. Once we all have expressed our opinion(s) and agree with the results of the discussion unanimously, any of us must actively terminate the discussion by replying with a single word <END>, followed by our final product modality, e.g., "<END> PowerPoint". | 2307.07924#68 | Communicative Agents for Software Development
2307.08074 | 68 | Type / Model              Rep.   Syn.   Con.   Ref.   Sub.   Ell.

Understanding (MRC)
RoBERTa (large)            66.7   61.4   68.0   64.0   69.8   25.0
+ Disco-Bench Pretrain     68.8   66.3   63.4   58.3   59.5   62.5
GPT-3.5                    27.1   38.6   33.5   25.8   49.2   12.5
GPT-4                      31.3   24.1   21.0   21.6   39.7   25.0

Translation (NT)
mBART (CC25)               94.0   85.3   92.7   95.9   83.3   76.5
+ Disco-Bench Pretrain     96.0   88.2   95.0   96.7   86.7   76.5
GPT-3.5                    32.0   59.4   24.4   26.0   44.8   37.3
GPT-4                      62.0   85.3   45.1   71.6   58.6   41.2

Generation (TC)
BART (large)               89.5   60.0   91.4   81.9   50.0
+ Disco-Bench Pretrain     90.8   84.0   94.3   84.5   56.0
GPT-3.5                    26.3   16.0   11.4   10.3
GPT-4                      60.5   52.0   11.4   50.9 | 2307.08074#68 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
2307.07924 | 69 | I suggest we design the Gomoku game as a desktop application. This modality will allow us to implement a visualized game that can be played on a computer. It will provide a user-friendly interface and an interactive gaming experience. Additionally, developing the game as a desktop application will leverage our expertise in programming and ensure that the product is realizable using the Python programming language.
I appreciate your initiative and I agree with your suggestion to design the Gomoku game as a desktop application. This modality aligns with our expertise and allows us to create a visually appealing and interactive game for our users. By leveraging an object-oriented programming language, we can ensure that the product is realizable and maintainable in the long run. Let's proceed with developing the Gomoku game as a desktop application.
: Great! I'm glad we're aligned on developing the Gomoku game as a desktop application.
: <END> Desktop Application.
: To satisfy the new user's demand and make a desktop application realizable, you should propose a concrete programming language.
: Python with the Pygame library is a suitable choice for implementing a basic Gomoku game.
: Great! Python is a versatile and widely-used programming language that will be suitable for implementing a basic Gomoku game. Its simplicity and readability make it a popular choice for beginners and experienced developers alike. | 2307.07924#69 | Communicative Agents for Software Development
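As a minimal sketch of the choice the agents settle on — a Pygame desktop board — the following illustrative snippet draws a 15x15 Gomoku board and places alternating stones on mouse clicks. It is not ChatDev's generated code.

```python
import pygame

SIZE, CELL, MARGIN = 15, 40, 40
pygame.init()
screen = pygame.display.set_mode((2 * MARGIN + (SIZE - 1) * CELL,) * 2)
pygame.display.set_caption("Gomoku")
stones, black_turn = {}, True  # (col, row) -> is_black

running = True
while running:
    for e in pygame.event.get():
        if e.type == pygame.QUIT:
            running = False
        elif e.type == pygame.MOUSEBUTTONDOWN:
            # snap the click to the nearest board intersection
            c = (round((e.pos[0] - MARGIN) / CELL), round((e.pos[1] - MARGIN) / CELL))
            if 0 <= c[0] < SIZE and 0 <= c[1] < SIZE and c not in stones:
                stones[c] = black_turn
                black_turn = not black_turn
    screen.fill((222, 184, 135))  # wooden board color
    for i in range(SIZE):  # grid lines
        pygame.draw.line(screen, (0, 0, 0), (MARGIN, MARGIN + i * CELL),
                         (MARGIN + (SIZE - 1) * CELL, MARGIN + i * CELL))
        pygame.draw.line(screen, (0, 0, 0), (MARGIN + i * CELL, MARGIN),
                         (MARGIN + i * CELL, MARGIN + (SIZE - 1) * CELL))
    for (x, y), is_black in stones.items():  # stones
        pygame.draw.circle(screen, (0, 0, 0) if is_black else (255, 255, 255),
                           (MARGIN + x * CELL, MARGIN + y * CELL), CELL // 2 - 2)
    pygame.display.flip()
pygame.quit()
```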
2307.07924 | 70 | : Let's get started with the development of the basic Gomoku game in Python using the Pygame library!
: <END> Python.

Designing: Brainstorm (via Self-Reflection)

Participants: CEO (Chief Executive Officer), CPO (Chief Product Officer)

: To satisfy the new user's demand and let our designed software further surprise the user, you should propose three fantastic ideas to make the desktop application more creative and imaginative via Python.
: Here are three fantastic ideas to make the Gomoku game software more creative and imaginative:
• Theme Customization: Allow users to customize the theme of the game board and pieces.
• Avatar Selection: Introduce a feature that allows users to select and customize their own avatars to represent them in the game.
• Special Moves: Incorporate special moves that players can unlock or earn during the game.
: To implement these ideas, we can incorporate the following features into the software: providing a theme selection menu where users can choose from a variety of visually appealing themes; and allowing users to customize their avatars' appearance, such as changing hairstyles, outfits, and accessories. By incorporating these features, we can create Gomoku game software that not only provides an engaging gameplay experience but also allows users to express their creativity and imagination. | 2307.07924#70 | Communicative Agents for Software Development
2307.08072 | 70 | Case studies for the 30B model under 16-, 4-, and 2-bit quantization.

MMLU — Question: Suppose now that a researcher wishes to use information criteria to determine the optimal lag length for a VAR. 500 observations are available for the bi-variate VAR, and the values of the determinant of the variance-covariance matrix of residuals are 0.0336, 0.0169, 0.0084, and 0.0062 for 1, 2, 3, and 4 lags respectively. What is the optimal model order according to Akaike's information criterion?
16-bit Answer: A
4-bit Answer: A
2-bit Answer: C

GSM8K — Question: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. How many letters are in Indras and her sister's names?
16-bit Answer: Indras has 6 letters. Her sister's name has 4 more letters than half of the letters in Indras' name. So her sister's name has 4 more letters than 3. So her sister's name has 4 + 3 = 7 letters. Indras has 6 letters and her sister has 7 letters. So the answer is 6 + 7 = 13.
4-bit Answer: Indras has 6 letters in her | 2307.08072#70 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
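The MMLU case quoted in the chunk above is a routine Akaike information criterion calculation, and working it through shows exactly what the quantized models are being asked to do. The sketch below uses one common textbook convention for the multivariate AIC of a VAR, AIC = ln|Sigma| + 2k'/T, where |Sigma| is the determinant of the residual variance-covariance matrix and k' is the total number of regressors across both equations; the answer options behind letters A-C are not reproduced in the chunk, so no mapping to a letter is attempted:

```python
import math

# Determinants of the residual variance-covariance matrix for lag lengths
# 1-4, as given in the MMLU question (T = 500 observations, bivariate VAR).
dets = {1: 0.0336, 2: 0.0169, 3: 0.0084, 4: 0.0062}
T, g = 500, 2  # sample size, number of variables in the VAR

for p, det in sorted(dets.items()):
    k = g + p * g ** 2                 # intercepts plus p lag matrices
    aic = math.log(det) + 2 * k / T
    print(f"lag {p}: AIC = {aic:.3f}")
# Prints -3.369, -4.040, -4.724, -5.012: under this convention the AIC keeps
# falling through lag 4, so the selected order is 4 lags.
```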
2307.08074 | 70 | # 6.4 Results on Diagnostic Test Suite
We evaluate three existing pretraining models on the diagnostic dataset: RoBERTa (large), BART (large), and mBART (CC25), each of which has exhibited superior performance on their respective representative tasks. "+ Disco-Bench Pretrain" denotes fine-grained pretraining on Disco-Bench data specific to each model. Subsequently, every model is fine-tuned using the training data derived from the corresponding downstream task.
Table 11 records the model's ability to rank a correct candidate higher than an incorrect one, revealing an overall accuracy percentage. Disco-Bench pretrained models generally improve the cohesion accuracies over their coarse-grained counterparts, which reconfirms our claim that fine-grained pretraining on Disco-Bench data helps model discourse information. Although the numbers are not comparable across tasks, we find that pretraining models on the understanding tasks generally perform worse on discourse modeling. One possible reason is that the understanding tasks are mostly classification tasks, whose signals may not be sufficient to guide models to learn discourse information. | 2307.08074#70 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of intra-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate cross-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
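The diagnostic protocol in the chunk above (rank a correct candidate above an incorrect one, report overall accuracy) reduces to a simple pairwise comparison. Below is a minimal sketch under assumed inputs; `score` is any model-derived scalar (for example, negative perplexity of the candidate in context), and `pairwise_ranking_accuracy` is a hypothetical helper, not Disco-Bench's released evaluation code:

```python
from typing import Callable, Iterable, Tuple

def pairwise_ranking_accuracy(
    score: Callable[[str], float],
    pairs: Iterable[Tuple[str, str]],
) -> float:
    """Fraction of (correct, incorrect) candidate pairs in which the model
    assigns the correct candidate the strictly higher score."""
    pairs = list(pairs)
    hits = sum(1 for good, bad in pairs if score(good) > score(bad))
    return hits / len(pairs)

# Toy usage with a stand-in scorer (shorter candidate wins -- illustration only;
# a real run would plug in a language model's scoring function).
demo = [("she picked it up", "she picked him up quickly")]
print(pairwise_ranking_accuracy(lambda s: -len(s), demo))
```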
2307.07924 | 71 | : To proceed with the implementation, we will need to assign a development team to work on the software. We will create a project plan, allocate resources, and set milestones to track the progress. Once the software is developed, we can conduct thorough testing to ensure its functionality, performance, and user experience meet the highest standards. We will also create documentation for users to ensure a seamless experience.
I appreciate your thorough approach to the implementation of the proposed features.
: Here is a dialogue between CPO and me: " : · · ·
: · · ·
: · · ·
: · · ·". Conclude the three most creative and imaginative brainstorm ideas from the whole discussion.
:
1. Theme Customization: Allow users to customize the theme of the game board and pieces.
2. Avatar Selection: Introduce a feature that allows users to select and customize their own avatars to represent them in the game.
3. Power-Ups and Special Moves: Incorporate power-ups and special moves that players can unlock or earn during the game.
# : Pseudo Self | 2307.07924#71 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
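The exchange above ends with one agent being asked to distill a transcript into three ideas. As a rough illustration of how such a step can be composed (the function and the exact prompt wording are hypothetical; ChatDev's actual prompts are not reproduced here):

```python
def conclude_ideas_prompt(dialogue: list, k: int = 3) -> str:
    """Wrap a role-tagged transcript into a summarization instruction
    (hypothetical helper for illustration)."""
    transcript = "\n".join(f"{role}: {utterance}" for role, utterance in dialogue)
    return (
        "Here is a dialogue between CPO and me:\n"
        f"{transcript}\n"
        f"Conclude the {k} most creative and imaginative brainstorm ideas "
        "from the whole discussion."
    )

print(conclude_ideas_prompt([("CPO", "Add theme customization."),
                             ("CEO", "And unlockable power-ups.")]))
```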
2307.08072 | 71 | has 4 + 3 = 7 letters. Indras has 6 letters and her sister has 7 letters. So the answer is 6 + 7 = 13. Answer:Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. So her sister's name has 4 more letters than 3. 3 + 4 = 7. So her sister's name has 7 letters. Indras has 6 letters and her sister has 7 letters. 6 + 7 = 13. The answer is 13. Answer:Indras' name has 6 letters. Her sister's name has 4 more letters than half of the letters in Indras' name. So she has 6 - 4 = 4 letters. 4 letters. The answer is 4. Question:How can I increase my productivity while working from home? AutoEval 16 4 Answer: There are several ways to increase your productivity at home. Here are some of the most effective: 1. Make a schedule: Scheduling your day and prioritizing your tasks can help you stay organized and boost your productivity. 2. Take breaks: Taking regular breaks can help keep you fresh and focused... Answer:Here are some ways you can increase your productivity while working from home: 1. Create a separate workspace: One of the biggest challenges | 2307.08072#71 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models (LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as to increase the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
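The abstract above reports that 4-bit quantized models retain the tested emergent abilities. For readers who want to reproduce that setting, here is a minimal sketch using the Hugging Face transformers 4-bit path via bitsandbytes; this is one common route, not necessarily the exact quantization scheme used in the paper, and the model identifier is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "huggyllama/llama-7b"  # placeholder; substitute the checkpoint under study

# 4-bit NF4 weights with fp16 compute, the usual low-memory configuration.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

inputs = tokenizer("Indras has 6 letters in her name.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```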
2307.08074 | 71 | The results on GPT-3.5 and GPT-4 reveal a significant performance gap between LLMs and those pretrained with Disco-Bench data, emphasizing the challenge of capturing discourse information.
# 7. Conclusion
This paper introduces a Disco-Bench benchmark for Chinese and/or English that can evaluate cross-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. We also propose a diagnostic test suite that can examine whether the target models learn discourse knowledge for in-depth linguistic analysis. Extensive experiments demonstrate that fine-grained pretraining based on document-level training data consistently improves the modeling of discourse information. We offer the datasets, pretrained models, and leaderboards to facilitate research in this field.
# 1. Appendix
# 1.1 Details of ZPR (Zero Pronoun Recovery) Task | 2307.08074#71 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
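The appendix heading that closes the chunk above refers to zero pronoun recovery: Chinese routinely drops subject and object pronouns that a ZPR model must restore explicitly. The details are cut off at the chunk boundary, so the example below is a constructed illustration of the task format, not an item from the Disco-Bench data:

```python
# Constructed ZPR example (not from the dataset): the parenthesized pronouns
# are dropped in natural Chinese and must be recovered by the model.
source    = "我昨天买了一本书,今天开始读。"
recovered = "我昨天买了一本书,(我)今天开始读(它)。"
gloss     = "I bought a book yesterday; (I) started reading (it) today."
for line in (source, recovered, gloss):
    print(line)
```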
2307.07924 | 72 | Coding: Code Generation
: CTO (Chief Technology Officer) : Programmer : We have decided to complete the task through an executable software with multiple files implemented via Python. You should write one or multiple files and make sure that every detail of the architecture is implemented as code. The software should be equipped with a graphical user interface (GUI), so you must choose a suitable GUI framework. Think step by step and reason yourself to the right decisions to make sure we get it right. You will first lay out the names of the core classes, functions, methods that will be necessary, as well as a quick comment on their purpose. | 2307.07924#72 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
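The CTO instruction in the chunk above asks the programmer to first lay out core classes for a GUI application. As a hedged illustration of the kind of skeleton that step produces (the class names and the choice of tkinter are hypothetical, not output from ChatDev):

```python
# Hypothetical class layout a programmer agent might sketch for a GUI board
# game before filling in the rules; tkinter is the standard-library GUI.
import tkinter as tk

class GameState:
    """Holds the board and tracks whose turn it is."""
    def __init__(self, size: int = 3):
        self.size = size
        self.board = [[None] * size for _ in range(size)]
        self.current_player = "X"

    def play(self, row: int, col: int) -> bool:
        """Place the current player's mark; return True if the move was legal."""
        if self.board[row][col] is not None:
            return False
        self.board[row][col] = self.current_player
        self.current_player = "O" if self.current_player == "X" else "X"
        return True

class GameGUI:
    """Renders GameState as a grid of buttons and routes clicks to it."""
    def __init__(self, state: GameState):
        self.state = state
        self.root = tk.Tk()
        self.root.title("Board Game")
        self.buttons = {}
        for r in range(state.size):
            for c in range(state.size):
                btn = tk.Button(self.root, width=6, height=3,
                                command=lambda r=r, c=c: self.on_click(r, c))
                btn.grid(row=r, column=c)
                self.buttons[(r, c)] = btn

    def on_click(self, row: int, col: int):
        mark = self.state.current_player
        if self.state.play(row, col):
            self.buttons[(row, col)].config(text=mark)

    def run(self):
        self.root.mainloop()

if __name__ == "__main__":
    GameGUI(GameState()).run()
```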