Dataset schema (one row per paper chunk):

| Column | Type | Values |
|---|---|---|
| doi | string | length 10 |
| chunk-id | int64 | 0 to 936 |
| chunk | string | 401 to 2.02k characters |
| id | string | 12 to 14 characters |
| title | string | 8 to 162 characters |
| summary | string | 228 to 1.92k characters |
| source | string | 31 characters |
| authors | string | 7 to 6.97k characters |
| categories | string | 5 to 107 characters |
| comment | string | 4 to 398 characters, nullable |
| journal_ref | string | 8 to 194 characters, nullable |
| primary_category | string | 5 to 17 characters |
| published | string | 8 characters |
| updated | string | 8 characters |
| references | list | entries of the form {"id": "<arXiv id>"} |
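Each row below holds one chunk of one arXiv paper together with that paper's metadata. As a rough illustration only (the dataset name is a placeholder, not a real repository), rows with this schema could be loaded and regrouped per paper with the Hugging Face `datasets` library:

```python
# Minimal sketch: load rows with the schema above and reassemble one paper's chunks.
# "user/arxiv-chunks" is a placeholder dataset name, not an actual repository.
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("user/arxiv-chunks", split="train")

papers = defaultdict(list)
for row in ds:
    papers[row["doi"]].append((row["chunk-id"], row["chunk"]))

# Chunks are ordered slices of the paper body; sorting by chunk-id restores the order.
body = "\n".join(text for _, text in sorted(papers["2307.08701"]))
print(body[:400])
```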
2307.08701 | 90 | WizardLM Test Set (Skills Details) Alpaca-13B-9k vs. Claude
Figure 28: Comparison with Claude-v1. The model achieves an average of 78.41% of ChatGPT's capacity on all 29 skills.
# WizardLM Test Set (Skills Details) Alpaca-13B-9k vs. Davinci-003
| 2307.08701#90 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models (LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
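The selection strategy summarized in the AlpaGasus abstract above is essentially "score every instruction/response pair with a strong LLM, then keep only the high-scoring ones." The sketch below shows that shape; `rate_quality`, the 0-5 scale, the 4.5 threshold, and the file names are illustrative stand-ins, not the paper's released code.

```python
# Minimal sketch of score-then-filter instruction-data selection.
# `rate_quality` stands in for prompting a strong LLM grader (e.g., ChatGPT);
# the toy heuristic below only keeps the example runnable end to end.
import json
from typing import Callable

def rate_quality(instruction: str, response: str) -> float:
    # Placeholder scorer on a 0-5 scale; a real run would call an LLM API here.
    return 5.0 if len(response.split()) > 3 else 1.0

def filter_ift_data(examples: list[dict],
                    scorer: Callable[[str, str], float] = rate_quality,
                    threshold: float = 4.5) -> list[dict]:
    # Keep only pairs the grader rates at or above the threshold.
    return [ex for ex in examples
            if scorer(ex["instruction"], ex["output"]) >= threshold]

if __name__ == "__main__":
    with open("alpaca_data.json") as f:      # 52k Alpaca-style records (path assumed)
        data = json.load(f)
    print(f"kept {len(filter_ift_data(data))} of {len(data)} examples")
```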
2307.08701 | 91 |
Figure 29: Comparison with Davinci-003. The model achieves an average of 91.11% of ChatGPT's capacity on all 29 skills.
# J HUMAN STUDY
We conduct the human study with three different users. The evaluation interface is shown in Table 15:
You'll be presented with a series of questions. For each question, two answers will be provided. Your task is to read both answers carefully and decide which one you believe is better. When judging, consider: Relevance: Does the answer directly address the question? Completeness: Is the answer comprehensive? Coherence: Is the answer logically structured and easy to understand? Accuracy: Is the information provided in the answer correct?
# Question: <QUESTION>
Answer A: <ANSWER A> Answer B: <ANSWER B>
Comparing these two answers, which answer is better? 1. Answer A is significantly better. 2. Answer B is significantly better. 3. Neither is significantly better.
Table 15: Human annotation interface.
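Judgments collected through this interface reduce to per-test-set win/tie/loss counts. The aggregation below is a generic sketch (field names and the simple counting are assumptions, not the paper's evaluation code):

```python
# Aggregate pairwise human judgments (1 = A better, 2 = B better, 3 = tie)
# into per-test-set counts. Field names are illustrative.
from collections import Counter

def tally(judgments: list[dict]) -> dict[str, Counter]:
    per_set: dict[str, Counter] = {}
    for j in judgments:  # e.g., {"testset": "Vicuna", "choice": 1}; A = Alpagasus-9k, B = Alpaca-52k
        c = per_set.setdefault(j["testset"], Counter())
        c["A_wins" if j["choice"] == 1 else "B_wins" if j["choice"] == 2 else "tie"] += 1
    return per_set

print(tally([{"testset": "Vicuna", "choice": 1},
             {"testset": "Vicuna", "choice": 3},
             {"testset": "Koala", "choice": 2}]))
```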
We show more detailed results of human evaluations in Fig. 30.
Figure 30: The detailed results of the human study (Alpagasus-13B-9k vs. Alpaca-13B-52k), broken down into Alpagasus-9k wins, ties, and Alpaca-52k wins on the Vicuna, Koala, WizardLM, and Self-Instruct test sets.
# K LIMITATIONS | 2307.08701#91 | AlpaGasus: Training A Better Alpaca with Fewer Data |
2307.08701 | 92 |
# K LIMITATIONS
Model Size. In our experiments, we evaluated our IFT strategy by training models of two different sizes, 7B and 13B, since they are the most common sizes for recent open-source LLMs. We plan to extend this study to larger model sizes such as 33B, 65B, or even 175B, and verify whether the same conclusion still holds, i.e., a small subset of high-quality data selected by our method can improve the instruction-finetuned model. We leave analysis on the IFT of larger models as future work.
| 2307.08701#92 | AlpaGasus: Training A Better Alpaca with Fewer Data |
2307.07924 | 0 | arXiv:2307.07924v4 [cs.SE] 19 Dec 2023
# Communicative Agents for Software Development
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
Tsinghua University; Beijing University of Posts and Telecommunications; Dalian University of Technology; Brown University; Modelbest Inc.
[email protected] [email protected] [email protected]
Figure 1: ChatDev, our virtual chat-powered company for software development, brings together "software agents" from diverse social identities, including chief officers, professional programmers, test engineers, and art designers. When presented with a preliminary task by a human "client" (e.g., "develop a gomoku game"), the software agents at ChatDev engage in effective communication and mutual verification through collaborative chatting. This process enables them to automatically craft comprehensive software solutions that encompass source codes, environment dependencies, and user manuals.
# Corresponding Authors.
# Abstract | 2307.07924#0 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
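The chat chain described in the abstract and Figure 1 (stages decomposed into instructor/assistant dialogues between role-playing agents) can be sketched roughly as below. The phase list, role prompts, and `ask_llm` call are illustrative placeholders, not ChatDev's actual implementation:

```python
# Rough sketch of a chat chain: each phase pairs an instructor and an assistant role
# that exchange messages for a few turns. `ask_llm` is a placeholder for a chat-model
# API call; the canned return value only keeps the sketch runnable.
PHASES = [
    ("CEO", "CTO", "Decide the modality and core features of the software."),
    ("CTO", "Programmer", "Write the complete source code."),
    ("Reviewer", "Programmer", "Review the code and fix the issues found."),
    ("Tester", "Programmer", "Run the software, report errors, and patch them."),
    ("CEO", "Programmer", "Write the user manual."),
]

def ask_llm(system_prompt: str, history: list[str]) -> str:
    # Placeholder: a real implementation would send the history to a chat model.
    return f"[{system_prompt}] response to: {history[-1][:40]}"

def run_chat_chain(task: str, max_turns: int = 3) -> list[str]:
    artifacts = []
    for instructor, assistant, subtask in PHASES:
        history = [f"Task: {task}", f"Subtask: {subtask}"]
        for _ in range(max_turns):
            history.append(ask_llm(f"You are the {instructor}.", history))
            history.append(ask_llm(f"You are the {assistant}.", history))
        artifacts.append(history[-1])  # keep the assistant's final output per phase
    return artifacts

print(run_chat_chain("develop a gomoku game"))
```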
2307.08072 | 0 | arXiv:2307.08072v2 [cs.CL] 26 Jul 2023
Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study
Peiyu Liu1,2, Zikang Liu1,2, Ze-Feng Gao1, Dawei Gao3, Wayne Xin Zhao1,2*, Yaliang Li3, Bolin Ding3, and Ji-Rong Wen1,2,4
1 Gaoling School of Artificial Intelligence, Renmin University of China; 2 Beijing Key Laboratory of Big Data Management and Analysis Methods; 3 Alibaba Group; 4 School of Information, Renmin University of China
[email protected], [email protected], [email protected], {zfgao,jrwen}@ruc.edu.cn, {gaodawei.gdw,yaliang.li,bolin.ding}@alibaba-inc.com
# Abstract | 2307.08072#0 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.07924 | 1 | Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable | 2307.07924#1 | Communicative Agents for Software Development |
2307.08072 | 1 | Despite the superior performance, Large Language Models (LLMs) require significant computational resources for deployment and use. To overcome this issue, quantization methods have been widely applied to reduce the memory footprint of LLMs as well as increasing the inference rate. However, a major challenge is that low-bit quantization methods often lead to performance degradation. It is important to understand how quantization impacts the capacity of LLMs. Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on emergent abilities, which are important characteristics that distinguish LLMs from small language models. Specially, we examine the abilities of in-context learning, chain-of-thought reasoning, and instruction-following in quantized LLMs. Our empirical experiments show that these emergent abilities still exist in 4-bit quantization models, while 2-bit models encounter severe performance degradation on the test of these abilities. To improve the performance of low-bit models, we conduct two special experiments: (1) fine-grained impact analysis that studies which components (or substructures) are more sensitive to | 2307.08072#1 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
2307.08074 | 1 | Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
# 1. Introduction | 2307.08074#1 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 2 | allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev. | 2307.07924#2 | Communicative Agents for Software Development |
2307.08072 | 2 | models, we conduct two special experiments: (1) fine-grained impact analysis that studies which components (or substructures) are more sensitive to quantization, and (2) performance compensation through model fine-tuning. Our work derives a series of important findings to understand the impact of quantization on emergent abilities, and sheds light on the possibilities of extremely low-bit quantization for LLMs. | 2307.08072#2 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
2307.08074 | 2 | # 1. Introduction
To evaluate the general performance of models, previous work proposed a variety of benchmarks, covering different tasks and languages such as GLUE (Wang et al. 2018a), CLUE (Xu, Zhang, and Dong 2020) and XGLUE (Liang et al. 2020). However, existing benchmarks pay little attention to discourse phenomena, which is a fundamental and challenging problem in natural language processing (NLP) (Kevitt, Partridge, and Wilks 1992). Natural language generally consists of meaningful, unified, and purposive groups of sentences, which are organized as a whole according to discourse properties (Cook 1989). As shown in Figure 1, the discourse property manifests in two ways: (1) cohesion, where the dependency between words or phrases makes them logically and consistently connected; (2) coherence, where the structural relation between segments or sentences enables them to be semantically and meaningfully composed.
*Viseen Building, Gaoxin 10th South Road, Nanshan District, Shenzhen, China. E-mail: [email protected].
| 2307.08074#2 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling |
2307.07924 | 3 | # Introduction
"Collaboration allows us to know more than we are capable of knowing by ourselves. It empowers us to think differently, access information we wouldn't have otherwise, and combine ideas as we work together towards a shared goal."
-- Paul Solarz
Software engineering entails a methodical and disciplined approach to the development, operation, and maintenance of software systems [4]. However, the complexity of software intelligence often leads to decisions based on intuition and limited consultation with senior developers [14]. Recent advancements in deep learning techniques have prompted researchers to explore their application in software engineering, aiming to improve effectiveness, efficiency, and cost reduction. Prior studies in deep learning-based software engineering have addressed various tasks, categorized as software requirements, design, implementation, testing, and maintenance [34; 29]. The software development process involves multiple roles, including organizational coordination, task allocation, code writing, system testing, and documentation preparation. It is a highly complex and intricate activity that demands meticulous attention to detail due to its long development cycles [17; 4]. | 2307.07924#3 | Communicative Agents for Software Development |
2307.08072 | 3 |
# 1 Introduction
Recently, Artificial Intelligence (AI) has witnessed remarkable progress due to the emergence of Large Language Models (LLMs) (Brown et al., 2020; Zhao et al., 2023). Compared with small-sized language models, LLMs, which largely scale the model size and training corpus size, have exhibited very different behaviors when elicited by specially designed prompts. Generally, LLMs can acquire superior abilities, such as in-context learning (ICL, Brown et al. 2020) and chain-of-thought reasoning (CoT, Wei et al. 2022), which may not be present in small-sized language models. Such abilities are often formally called emergent abilities (Wei et al., 2022)1.
*Corresponding author. | 2307.08072#3 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
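As a purely illustrative reminder of what these two abilities look like at the prompt level, the snippet below builds a few-shot in-context-learning prompt and a chain-of-thought prompt; the demonstrations are invented and are not taken from the paper's evaluation sets.

```python
# Illustrative only: invented demonstrations for ICL and CoT prompting.
icl_prompt = (
    "Review: The plot was dull and predictable. Sentiment: negative\n"
    "Review: A warm, beautifully acted film. Sentiment: positive\n"
    "Review: I would not watch it again. Sentiment:"   # model is expected to continue with "negative"
)

cot_prompt = (
    "Q: A library had 25 books and bought 3 boxes of 8 books each. How many books does it have now?\n"
    "A: Let's think step by step. 3 boxes of 8 books is 24 books. 25 + 24 = 49. The answer is 49.\n"
    "Q: A train travels 60 km per hour for 2.5 hours. How far does it go?\n"
    "A: Let's think step by step."
)

print(icl_prompt)
print(cot_prompt)
```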
2307.08074 | 3 |
Figure 1: Discourse definition and example. (The figure annotates the example "Audi is an automaker that makes luxury cars. It was established by August Horch in 1910. The company now produces cars of outstanding quality." with cohesion links such as anaphora, coreference, and repetition, and with coherence structure relations between sentences.)
Literary texts, including novels, essays, and poetry, are pivotal discourse-aware NLP benchmarks due to their substantial volume and unique linguistic characteristics. Their complex structures, rich vocabularies, and varied syntax present a comprehensive testbed for advanced NLP tasks, stretching the capabilities of the technology. Additionally, they offer a wealth of contextual and intertextual information that facilitates complex NLP tasks like context understanding and story generation.
To bridge the gap, we introduce a Disco-Bench benchmark for the target evaluation
on the discourse modeling. Disco-Bench comprises three parts: • Disco-Bench Benchmark: It consists of nine Chinese/English discourse-aware tasks covering a broad range of NLP tasks (understanding, translation, and generation), data quantities (from 26.4K to 2.4M), and difficulties. Besides, most benchmarking datasets are newly created in this work. | 2307.08074#3 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling |
2307.07924 | 4 | In recent years, large language models (LLMs) have achieved significant milestones in the field of natural language processing (NLP) [5] and computer vision (CV) [35]. After training on massive corpora using the "next word prediction" paradigm, LLMs have demonstrated impressive performance on a wide range of downstream tasks, such as context-aware question answering, machine translation, and code generation. In fact, the core elements involved in software development, namely codes and documents, can both be regarded as "language" (i.e., sequences of characters) [7]. From this perspective, this paper explores an end-to-end software development framework driven by LLMs, encompassing requirements analysis, code development, system testing, and document generation, aiming to provide a unified, efficient, and cost-effective paradigm for software development.
Directly generating an entire software system using LLMs can result in code hallucinations to a certain extent, similar to the phenomenon of hallucination in natural language knowledge querying [2]. These hallucinations include incomplete implementation of functions, missing dependencies, and potential undiscovered bugs. Code hallucinations arise primarily due to two reasons. Firstly,
| 2307.07924#4 | Communicative Agents for Software Development |
2307.08072 | 4 | Despite the superior performance, it is very costly to deploy LLMs in real-world applications due to the huge model size. Faced with this issue, model quantization (Dettmers et al., 2022; Frantar et al., 2022; Yao et al., 2023a) has become a widely used approach to reducing the memory footprint of LLMs. The essential idea of quantization is to map floating-point numbers into low-bit integers (e.g., BF16 to INT8), so as to reduce the total model bits. Typically, existing methods take a post-training quantization (PTQ) approach (Frantar et al., 2022; Dettmers et al., 2022) without retraining the model parameters. However, existing PTQ methods often suffer from performance degradation in low-bit quantization. | 2307.08072#4 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study |
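The float-to-integer mapping described in the chunk above can be illustrated with a toy round-trip. This is a generic absolute-maximum (absmax) INT8 scheme for illustration only, not the specific PTQ algorithms (e.g., GPTQ) that the paper evaluates, which quantize per group/channel with careful calibration.

```python
# Toy symmetric absmax quantization of a weight vector to INT8 and back.
import numpy as np

def quantize_int8(w: np.ndarray):
    scale = float(np.max(np.abs(w))) / 127.0            # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.42, -1.30, 0.07, 2.15], dtype=np.float32)
q, s = quantize_int8(w)
print(q, dequantize(q, s))   # reconstruction is close to w, with small rounding error
```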
2307.08074 | 4 | • Disco-Bench Diagnostic Dataset: To understand the discourse information learned by models, it also includes a dataset of 1,294 hand-crafted examples for probing trained models. Each instance in the dataset is a contrastive pair, where the correct candidate is the original instance in the benchmark and the incorrect one is a perturbation created by modifying discourse devices or structures in the correct candidate.
• Disco-Bench Training Data: We introduce large-scale (400G), document-level data in Chinese and English, in the same literature domain as the benchmark. The training data enables fine-grained pretraining to better model the discourse information required by the benchmark. | 2307.08074#4 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling |
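A generic way to run the contrastive probe described above is to check whether a model scores the original candidate higher than the perturbed one; the sketch below assumes a placeholder `score_text` where a real probe would use a language model's log-likelihood, and the example pair is invented.

```python
# Generic contrastive probe: an instance "passes" if the model assigns a higher
# score to the original text than to the discourse-perturbed one.
def score_text(text: str) -> float:
    return -len(text)   # placeholder heuristic so the sketch runs; use LM log-probs in practice

def probe_accuracy(pairs: list[tuple[str, str]]) -> float:
    correct = sum(score_text(good) > score_text(bad) for good, bad in pairs)
    return correct / len(pairs)

pairs = [
    ("Audi makes luxury cars. It was established in 1910.",
     "Audi makes luxury cars. They was established in 1910."),   # invented cohesion perturbation
]
print(probe_accuracy(pairs))
```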
2307.07924 | 5 |
the lack of task specificity confuses LLMs when generating a software system in one step. Granular tasks in software development, such as analyzing user/client requirements and selecting programming languages, provide guided thinking that is absent in the high-level nature of the task handled by LLMs. Secondly, the absence of cross-examination in decision-making poses significant risks [9]. Individual model instances propose a diverse range of answers, which requires debating or examining the responses from other model instances to converge on a single and more accurate common answer [12], such as code peer-review and suggestion feedback. | 2307.07924#5 | Communicative Agents for Software Development |
2307.08072 | 5 | To use the quantized LLMs in an effective way, it is important to understand what level of performance can be attained in varied bit precision, e.g., what is the lowest bit precision for quantization to achieve decent performance on a specific task? More recently, several studies have conducted comprehensive evaluation experiments on the impact of model quantization on the performance of LLMs (Yao et al., 2023b; Dettmers and Zettlemoyer, 2022). However, they mainly analyze the general performance of quantized LLMs (e.g., language modeling), lacking a deep investigation into LLM's abilities on complex tasks.
In this work, we focus on examining the per- [Footnote 1: There is still no consensus on the existence of emergent abilities, due to the lack of continuity in evaluation metrics and model sizes in the empirical study (Wei et al., 2022). It is also known that small models can possess some emergent abilities with special adaptation. Despite that, we still use this term to emphasize the superior performance of LLMs.] | 2307.08072#5 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 5 | To better understand the challenges posed by Disco-Bench, we conduct experiments on a variety of state-of-the-art models, including Transformer, pretrained models, as well as large language models (LLMs). Table 2 shows the overall performance. We found that these tasks display different levels of difficulty, resulting in different behaviors and performances across models. Furthermore, fine-grained pretraining based on the document-level and discourse-rich Disco-Bench data improves performance, particularly on cohesive translation and coherent generation. However, the best models still achieve a fairly low absolute score, highlighting the difficulty of modeling discourse. There are three main contributions in this work: • Challenging Tasks: We propose a diverse set of discourse-aware tasks to evaluate monolingual and cross-lingual models' ability to understand and generate texts. • Considerable Resources: We build and release a variety of discourse-aware resources, including benchmarking datasets, a diagnostic test suite, a large-scale pretraining corpus, and discourse-aware pretrained models.
[Figure 2 legend: Plain Models, Existing Pretrained Models, In-domain Pretrained Models, Large Language Models] | 2307.08074#5 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 6 | To address the aforementioned challenges, we "establish" a virtual chat-powered software technology company, ChatDev. It follows the classic waterfall model [3] and divides the process into four phases: designing, coding, testing, and documenting. At each phase, ChatDev recruits multiple "software agents" with different roles, such as programmers, reviewers, and testers. To facilitate effective communication and collaboration, ChatDev utilizes a proposed chat chain that divides each phase into atomic subtasks. Within the chat chain, each node represents a specific subtask, and two roles engage in context-aware, multi-turn discussions to propose and validate solutions. This approach ensures that client requirements are analyzed, creative ideas are generated, prototype systems are designed and implemented, potential issues are identified and addressed, debug information is explained, appealing graphics are created, and user manuals are generated. By guiding the software development process along the chat chain, ChatDev delivers the final software to the user, including source code, dependency environment specifications, and user manuals. | 2307.07924#6 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
formance of quantized LLMs on solving complex tasks, to explore the impact of quantization on the emergent abilities of LLMs. As demonstrated in previous studies (Wei et al., 2022), there exists a strong dependency between emergent abilities and parameter scale. It is curious whether the emergent abilities would vanish under low-bit precision even though the model size remains at the original scale. In addition, it is also important to explore the factors (e.g., the model structure) that potentially affect the emergent abilities. Furthermore, we are also interested in potential approaches to enhance the performance of low-bit models.
Specifically, we aim to answer the following two questions: (1) Do emergent abilities exist in quantized large language models? If so, what level of performance can they achieve? (2) How can the performance of low-bit models be enhanced? To answer these two key questions, we assess three key abilities, namely in-context learning (ICL), chain-of-thought reasoning (CoT), and instruction following (IF), on a collection of LLaMA models (Touvron et al., 2023), which are widely used as backbone models. We conduct extensive empirical experiments, aiming to gain a better understanding of the performance of quantized LLMs (an illustrative sketch of this evaluation grid follows below). | 2307.08072#6 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
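The evaluation grid implied by the chunk above (four LLaMA sizes, four bit widths, three emergent abilities) can be enumerated with a small driver sketch. The `evaluate` function below is a hypothetical placeholder for loading a quantized checkpoint and running the corresponding benchmark; it is not part of the paper's released code.

```python
from itertools import product

MODEL_SIZES = ["7B", "13B", "30B", "65B"]
BIT_WIDTHS = [16, 8, 4, 2]
ABILITIES = ["ICL", "CoT", "IF"]

def evaluate(size: str, bits: int, ability: str) -> float:
    """Hypothetical placeholder: load the quantized LLaMA checkpoint of the given
    size/precision and score it on the benchmark for the given ability."""
    raise NotImplementedError

results = {}
for size, bits, ability in product(MODEL_SIZES, BIT_WIDTHS, ABILITIES):
    try:
        results[(size, bits, ability)] = evaluate(size, bits, ability)
    except NotImplementedError:
        results[(size, bits, ability)] = None  # to be filled by real runs
print(len(results))  # 4 sizes x 4 bit widths x 3 abilities = 48 configurations
```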
2307.08074 | 6 | [Figure 2 legend: Plain Models, Existing Pretrained Models, In-domain Pretrained Models, Large Language Models]
(a) Results of our benchmark. (b) Results of our diagnostic test suite.
Figure 2: The overall performance of various models on our discourse-aware benchmark and diagnostic test suite. For each model category, the highest scores are selected to represent the overall performance level.
• Comprehensive Comparisons: We systematically compare many advanced pretraining methods on the benchmark, and identify current challenges in discourse modelling for future exploration.
# 2. Preliminary
# 2.1 Discourse
A discourse is an instance of language use whose type can be classified on the basis of such factors as grammatical and lexical choices and their distribution in main versus supportive materials, theme, style, and the framework of knowledge and expectations within which the addressee interprets the discourse (Elson and Pickett 1983; Crystal 1985; Hanks 1987; Longacre 1990). A discourse has seven fundamental properties: cohesion, coherence, intentionality, acceptability, informativity, situationality and intertextuality (De Beaugrande and Dressler 1981). Among them, cohesion and coherence have often been studied in discourse analysis (Sanders and Maat 2006; Xiong et al. 2013). | 2307.08074#6 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 7 | The experiment analyzed all the software produced by ChatDev in response to 70 user requirements. On average, ChatDev generated 17.04 files per software, alleviated potential code vulnerabilities caused by code hallucinations 13.23 times, had a software production time of 409.84 seconds, and incurred a manufacturing cost of $0.2967. Discussions between a reviewer and a programmer led to the identification and modification of nearly twenty types of code vulnerabilities, while discussions between a tester and a programmer resulted in the identification and resolution of more than ten types of potential bugs. In summary, our main contributions are as follows:
• We propose ChatDev, a chat-based software development framework. By merely specifying a task, ChatDev sequentially handles designing, coding, testing, and documenting. This new paradigm simplifies software development by unifying main processes through language communication, eliminating the need for specialized models at each phase.
• We propose the chat chain to decompose the development process into sequential atomic subtasks. Each subtask requires collaborative interaction and cross-examination between two roles. This framework enables multi-agent collaboration, user inspection of intermediate outputs, error diagnoses, and reasoning intervention. It ensures a granular focus on specific subtasks within each chat, facilitating effective collaboration and promoting the achievement of desired outputs. | 2307.07924#7 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 7 | For the first question, we evaluate the LLaMA models at four sizes (i.e., 7B, 13B, 30B, and 65B), examining their performance across a range of precision levels: 2-bit, 4-bit, 8-bit, and 16-bit. Our experiments indicate that 4-bit precision yields the most favorable trade-off between model performance and memory footprint, achieving superior results with the same amount of allocated total bits (an illustrative memory-footprint sketch follows below). However, models of all sizes suffer a severe decline at 2-bit precision. | 2307.08072#7 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
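The trade-off reported in the chunk above is easier to see with a quick back-of-the-envelope calculation of weight-storage cost at each precision. The sketch below is purely illustrative: the parameter counts are the commonly cited approximate sizes of the LLaMA family, and the figures ignore quantization metadata (scales, zero points) as well as activation memory.

```python
# Approximate GiB needed to store model weights alone at a given bit width.
PARAMS_BILLION = {"LLaMA-7B": 6.7, "LLaMA-13B": 13.0, "LLaMA-30B": 32.5, "LLaMA-65B": 65.2}

def weight_memory_gib(params_billion: float, bits: int) -> float:
    return params_billion * 1e9 * bits / 8 / 2**30

for name, n in PARAMS_BILLION.items():
    row = ", ".join(f"{b}-bit: {weight_memory_gib(n, b):6.1f} GiB" for b in (16, 8, 4, 2))
    print(f"{name:<9} {row}")
```

Under this accounting, a 4-bit 13B model occupies roughly the same number of total bits as an 8-bit 7B model, which is the sense in which 4-bit quantization gives superior results at the same total-bit budget.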
2307.08074 | 7 | Cohesion. Cohesion occurs whenever "the interpretation of some element in the discourse is dependent on that of another" (Halliday and Hasan 1976). Referential cohesion (i.e. anaphora and coreference) and lexical cohesion (i.e. repetition and collocation) are commonly used cohesive devices. Examples are shown in Figure 3.
Anaphora. It is the use of an expression whose interpretation depends specifically upon an antecedent expression. The anaphoric (referring) term is called an anaphor. Sometimes an anaphor may rely on a postcedent expression instead; this phenomenon is called cataphora. As shown in Figure 3(a), the pronoun "It" is an anaphor, which points to the left toward its antecedent "Audi". Zero anaphora is a more complex case of anaphora.
| 2307.08074#7 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 8 | • To further alleviate potential challenges related to code hallucinations, we introduce the thought instruction mechanism in each independent chat process during code completion, reviewing, and testing. By performing a "role flip", an instructor explicitly injects specific thoughts for code modifications into instructions, thereby guiding the assistant programmer more precisely.
• The experiments demonstrate the efficiency and cost-effectiveness of ChatDev's automated software development process. Through effective communication, proposal, and mutual examination between roles in each chat, the framework enables effective decision-making.
# 2 ChatDev
Similar to hallucinations encountered when using LLMs for natural language querying [2], directly generating entire software systems using LLMs can result in severe code hallucinations, such as incomplete implementation, missing dependencies, and undiscovered bugs. These hallucinations may stem from the lack of specificity in the task and the absence of cross-examination in decision-making. To address these limitations, as Figure 1 shows, we establish a virtual chat-powered software technology company, ChatDev, which comprises recruited agents with diverse social identities, such as chief officers, professional programmers, test engineers, and art designers. When presented with a task, the diverse agents at ChatDev collaborate to develop the required software, including an executable system, environmental guidelines, and user manuals. This paradigm revolves around leveraging large language models as the core thinking component, enabling the agents to simulate
| 2307.07924#8 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 8 | Regarding the second question, we carefully examine the quantization sensitivity of different model components (or substructures), specifically attention and feed-forward networks (FFN). In our experiments, we find that FFN plays a crucial role in retaining model performance under low-bit quantization. We also evaluate the effects of outlier dimensions, which are specific dimensions that exhibit significantly higher values than others in feature activations. We find that the outlier dimensions affecting most Transformer layers are primarily responsible for the decline in quantization performance, and that they mainly concentrate on the down projections of the FFN. These observations motivate us to design more fine-grained substructure quantization strategies for improving the performance of low-bit models (an illustrative outlier-detection sketch follows below). | 2307.08072#8 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
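The outlier dimensions discussed in the chunk above can be illustrated with a small detection sketch: flag hidden dimensions whose activation magnitude is far above the typical scale. The fixed-ratio criterion used here is a simplified illustration of the idea, not the exact rule used in the paper.

```python
import numpy as np

def find_outlier_dims(activations: np.ndarray, ratio: float = 20.0) -> np.ndarray:
    """Return indices of hidden dimensions whose max |activation| exceeds
    `ratio` times the mean |activation| of the whole tensor.
    activations: shape (num_tokens, hidden_size)."""
    per_dim_max = np.abs(activations).max(axis=0)
    typical = np.abs(activations).mean() + 1e-8
    return np.where(per_dim_max > ratio * typical)[0]

# Toy demo: inject two artificial outlier dimensions into random activations.
rng = np.random.default_rng(0)
acts = rng.normal(size=(128, 512))
acts[:, 7] *= 60.0
acts[:, 300] *= 45.0
print(find_outlier_dims(acts))  # expected: [  7 300]
```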
2307.08074 | 8 | [Figure 3 examples of cohesion devices: (a) Anaphora: "Audi is an automaker that makes luxury cars. It was established by August Horch." (b) Coreference: "We invited the 1st HK Chief Executive to our school. Mr. Tung Chee-hwa told a story." (c) Repetition (same word/synonyms): "A: Which dress/frock are you going to wear? B: I will wear my green frock/dress." (d) Collocation: "Once upon a time there was an ugly duckling."]
Figure 3: Examples of different cohesion devices.
In pro-drop languages such as Chinese and Japanese, pronouns can be omitted to make a sentence compact yet comprehensible when the identity of the pronouns can be inferred from the context. These omissions are rarely a problem for humans, since we can easily recall the missing pronouns from the context.
Coreference. Two or more expressions (e.g. nouns) in a text refer to the same referent. As the referents point to persons or things in the real world, the coreference relation can exist independently of the context. As shown in Figure 3(b), the noun phrases "HK Chief Executive" and "Mr. Tung Chee-hwa" point to the same person, although their surface forms are totally different. | 2307.08074#8 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 9 | the entire software development process, circumventing the need for additional model training and mitigating undesirable code hallucinations to some extent.
# 2.1 Chat Chain
ChatDev employs the widely adopted waterfall model, a prominent software development life cycle model, to divide the software development process into four distinct phases: designing, coding, testing, and documenting. In the designing phase, innovative ideas are generated through collaborative brainstorming, and technical design requirements are defined. The coding phase involves the development and review of source code, while the testing phase integrates all components into a system and utilizes feedback messages from the interpreter for debugging. The documenting phase encompasses the generation of environment specifications and user manuals. Each of these phases necessitates effective communication among multiple roles, posing challenges in determining the sequence of interactions and identifying the relevant individuals to engage with. | 2307.07924#9 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 9 | Furthermore, we study how to enhance the performance of quantized models through fine-tuning. We evaluate the impact of different fine-tuning methods executed before and after quantization. Our experimental results reveal that parameter-efficient fine-tuning after quantization can achieve commendable performance with significantly reduced computational resources. Our approach can fine-tune a 2-bit LLaMA-65B model on a single NVIDIA A100, surpassing the performance of a 16-bit LLaMA-13B model on the zero-shot MMLU dataset (an illustrative adapter-based sketch follows below).
# 2 Background
In this section, we introduce the background for emergent abilities and post-training quantization.
Emergent Abilities With the increase of model parameters and training corpus size, LLMs exhibit some special abilities that may not be present in small-sized language models, called emergent abilities (Wei et al., 2022). Emergent abilities are an important indication of the superior performance of LLMs, which has received much attention in the research community. Following the survey on LLMs (Zhao et al., 2023), we focus on discussing three key emergent abilities, namely in-context learning, chain-of-thought reasoning, and instruction following. Next, we will briefly introduce each ability. | 2307.08072#9 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
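One common way to realize "parameter-efficient fine-tuning after quantization", as described in the chunk above, is to freeze the quantized weights and train a small low-rank adapter on top of them. The LoRA-style sketch below illustrates that idea in plain PyTorch; the layer shape, the rank, and the fact that the frozen weight is kept dequantized are placeholder assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

class QuantLinearWithLoRA(nn.Module):
    """A frozen (already-quantized) linear layer plus a trainable low-rank adapter."""

    def __init__(self, weight_q: torch.Tensor, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.register_buffer("weight_q", weight_q)          # frozen, receives no gradients
        out_f, in_f = weight_q.shape
        self.lora_a = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight_q.t()                         # frozen quantized path
        update = (x @ self.lora_a.t()) @ self.lora_b.t()     # trainable low-rank path
        return base + self.scaling * update

layer = QuantLinearWithLoRA(weight_q=torch.randn(256, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 8*512 + 256*8 = 6144 trainable values vs. 131072 frozen weights
```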
2307.08074 | 9 | Lexical Cohesion. Lexical cohesion refers to the way related words are chosen to link elements of a text. "Repetition" indicates linking via the same word, or via synonyms, antonyms, etc. As shown in Figure 3(c), the synonyms "dress" and "frock" across the two sentences are a repetition case. In the "collocation" form, related words are typically put together or tend to repeat the same meaning. For example, the phrase "once upon a time" in Figure 3(d) is a collocation case. | 2307.08074#9 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 10 | To address this, we propose a generalized architecture by breaking down each phase into multiple atomic chats, each with a specific focus on task-oriented role-playing involving two distinct roles. Through the exchange of instructions and collaboration between the participating agents, the desired output for each chat, which forms a vital component of the target software, is achieved. An illustration of this process is depicted in Figure 2, where a sequence of intermediate task-solving chats, referred to as a "chat chain", is presented. In each chat, an instructor initiates instructions, guiding the dialogue towards task completion, while the assistant follows the instructions, provides suitable solutions, and engages in discussions regarding feasibility. The instructor and assistant cooperate through multi-turn dialogues until they reach a consensus and determine that the task has been successfully accomplished.
The chat chain provides a transparent view of the software development process, shedding light on the decision-making path and offering opportunities for debugging when errors arise. This enables users to inspect intermediate outputs, diagnose errors, and intervene in the reasoning process if necessary. Besides, the chat chain ensures a granular focus on specific subtasks within each phase, facilitating effective collaboration and promoting the attainment of desired outputs. | 2307.07924#10 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 10 | • In-Context Learning (ICL) was introduced by GPT-3 (Brown et al., 2020) to solve complex tasks through specially designed prompts. It can effectively guide LLMs to generate the intended output for test examples by leveraging natural language instructions and/or task demonstrations, without necessitating additional training or gradient updates.
• Chain-of-Thought reasoning (CoT) is a special prompting strategy that tackles intricate tasks that encompass multiple reasoning steps, such as mathematical word problems. It incorporates intermediate reasoning steps for each demonstration in the prompt, thus eliciting the capacity of solving complex tasks via step-by-step reasoning (see the prompt sketch below).
• Instruction Following (IF) refers to the superior ability of an LLM to follow human instructions and complete the target task as needed. Though it shares a similar format with ICL in using natural language instructions, it often includes no demonstrations and requires specific tuning (i.e., instruction tuning) to elicit this ability.
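As a concrete illustration of the contrast between ICL and CoT prompting drawn in the bullets above, the sketch below builds both kinds of prompts for a toy arithmetic word problem. The demonstration and wording are invented for illustration and are not taken from the paper's evaluation data.

```python
demo_q = "Q: Tom has 3 apples and buys 2 more. How many apples does he have?"
test_q = "Q: Sara has 5 pens and gives away 2. How many pens does she have?"

# In-context learning: the demonstration shows only the final answer.
icl_prompt = f"{demo_q}\nA: 5\n\n{test_q}\nA:"

# Chain-of-thought: the demonstration also spells out the intermediate reasoning.
cot_prompt = (
    f"{demo_q}\nA: Tom starts with 3 apples. He buys 2 more, so 3 + 2 = 5. "
    f"The answer is 5.\n\n{test_q}\nA:"
)

print(icl_prompt)
print("---")
print(cot_prompt)
```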
Note that emergent abilities can be defined on different tasks or settings. We select the three abilities for study mainly because they are widely utilized for solving complex tasks. | 2307.08072#10 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 10 | Coherence. Coherence is created referentially, when different parts of a text refer to the same entities, and relationally, by means of coherence relations such as "Cause-Consequence" between different discourse segments. Discourse structures such as RST (Rhetorical Structure Theory; Mann and Thompson 1988) are usually used to analyze the coherence of a text. RST relations are applied recursively in a text until all units in that text are constituents in a predefined relation. As shown in Figure 4, the result of such an analysis is that the RST structure is typically represented as a tree, with one top-level relation that encompasses other relations at lower levels. There are a number of predefined relations such as "Attribution" (causality) and "Contrast" (adversative relation), and the leaves are presented as segments/parts of the text.1
# 2.2 Related Work
Evaluation benchmarks are important for developing deep learning models, which enable comparison between different models and probe models for understanding of
1http://www.sfu.ca/rst/index.html.
| 2307.08074#10 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 11 | [Figure 2 diagram: phase-level waterfall stages (Designing, Coding, Testing, Documenting) and a chat-level chat chain in which role pairs (CEO, CTO, CPO, Programmer, Designer, Reviewer, Tester) successively turn the given {task} into {modality}, {language}, {code}, {spec}, and {manual}.]
Figure 2: The proposed architecture of ChatDev consists of phase-level and chat-level components. At the phase level, the waterfall model is used to break down the software development process into four sequential phases. At the chat level, each phase is further divided into atomic chats. These atomic chats involve task-oriented role-playing between two agents, promoting collaborative communication. The communication follows an instruction-following style, where agents interact to accomplish a specific subtask within each chat.
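The phase-level/chat-level organization summarized in the caption above can be viewed as a simple driver loop: each phase is split into atomic subtasks, and each subtask is resolved by a multi-turn exchange between an instructor role and an assistant role. The sketch below is a simplified illustration of that control flow; `query_llm`, the role pairings, the subtask names, and the consensus marker are placeholder assumptions rather than ChatDev's actual implementation.

```python
def query_llm(role: str, context: str) -> str:
    """Hypothetical stand-in for a call to a chat LLM prompted to act as `role`."""
    raise NotImplementedError

CHAT_CHAIN = [
    ("designing",   [("CEO", "CPO", "choose modality"), ("CEO", "CTO", "choose language")]),
    ("coding",      [("CTO", "Programmer", "write code"), ("Designer", "Programmer", "add GUI assets")]),
    ("testing",     [("Reviewer", "Programmer", "review code"), ("Tester", "Programmer", "run and debug")]),
    ("documenting", [("CTO", "Programmer", "write spec"), ("CEO", "CPO", "write user manual")]),
]

def run_chat_chain(task: str, max_turns: int = 5) -> dict:
    artifacts = {"task": task}
    for phase, subtasks in CHAT_CHAIN:
        for instructor, assistant, subtask in subtasks:
            context = f"Task: {task}\nPhase: {phase}\nSubtask: {subtask}\nArtifacts: {artifacts}"
            solution = ""
            for _ in range(max_turns):                    # multi-turn dialogue per subtask
                instruction = query_llm(instructor, context)
                solution = query_llm(assistant, context + "\n" + instruction)
                context += f"\n{instructor}: {instruction}\n{assistant}: {solution}"
                if "<CONSENSUS>" in solution:             # both roles agree the subtask is done
                    break
            artifacts[subtask] = solution                 # later chats see earlier outputs
    return artifacts
```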
# 2.2 Designing | 2307.07924#11 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 11 | Post-Training Quantization Due to the huge number of parameters, it is often infeasible to conduct full tuning of the model parameters. Thus, post-training quantization (PTQ) (Dettmers et al., 2022; Frantar et al., 2022; Yao et al., 2023b) methods are widely used for LLMs. PTQ methods often rely only on a small amount of calibration data to tune the quantization parameters, which is very efficient in implementation. In this work, we adopt a popular quantization method, GPTQ (Frantar et al., 2022), to conduct our experiments. Specifically, GPTQ employs a layerwise reconstruction loss to minimize the discrepancy between the outputs of the original weights $W$ and the quantized weights $\widehat{W}$ by optimizing the objective $\arg\min_{\widehat{W}} \| WX - \widehat{W}X \|_2^2$. It can achieve very promising results for 4-bit quantization of LLMs, and also provides support for lower-bit weight quantization (an illustrative numeric sketch of the layerwise objective follows below). | 2307.08072#11 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
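As an illustration of the layerwise reconstruction objective described in the chunk above (2307.08072, chunk 11), the following minimal NumPy sketch evaluates $\| WX - \hat{W}X \|_2^2$ for one linear layer, using a naive round-to-nearest quantizer on a small synthetic calibration batch. It is not the GPTQ algorithm itself (GPTQ updates weight columns sequentially with Hessian-based error compensation); all sizes and names here are illustrative.

```python
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Naive per-row round-to-nearest weight quantization, returned in dequantized form."""
    qmax = 2 ** bits - 1
    w_min = w.min(axis=1, keepdims=True)
    scale = (w.max(axis=1, keepdims=True) - w_min) / qmax
    scale[scale == 0] = 1.0                      # guard against constant rows
    q = np.clip(np.round((w - w_min) / scale), 0, qmax)
    return q * scale + w_min                     # dequantized weights W_hat

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))                   # original layer weights (d_out x d_in)
X = rng.normal(size=(64, 32))                    # small calibration activations (d_in x n)

W_hat = quantize_rtn(W, bits=4)
# Layerwise reconstruction loss: || W X - W_hat X ||_2^2
loss = np.sum((W @ X - W_hat @ X) ** 2)
print(f"4-bit reconstruction loss: {loss:.4f}")
```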
2307.08074 | 11 | 1http://www.sfu.ca/rst/index.html.
Example passage from Figure 4: "Telxon Corp. said its president resigned and its Houston work force has been trimmed by 15%. The maker of computer systems said the personnel changes were needed to improve the efficiency of its manufacturing operation. The company said it hasn't named a successor to Ronald Button, the president who resigned. Its Houston work force now totals 230." (RST relations shown in the figure: Elaboration, Attribution.)
Figure 4: An example of coherence properties represented by RST tree.
specific linguistic phenomena. Conneau and Kiela (2018) collected SentEval containing several sentence-level classification tasks to test the representational power of models. Closely related to this work, DiscoEval (Chen, Chu, and Gimpel 2019) extended these tasks to evaluate discourse-related knowledge in pretrained models. DiscoEval only evaluates sentence encoders with language understanding tasks in English. In contrast, we extend the tasks to a broader range of NLP tasks, which can evaluate different types of models (e.g. encoder-based BERT, decoder-based GPT, and encoder-decoder based mBART). In addition, our benchmarks cover both Chinese and English. | 2307.08074#11 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 12 | # 2.2 Designing
In the designing phase, ChatDev receives an initial idea from a human client. This phase involves three predefined roles: CEO (chief executive officer), CPO (chief product officer), and CTO (chief technology officer). The chat chain then breaks down the designing phase into sequential atomic chatting tasks, including decisions regarding the target software's modality (CEO and CPO) and the programming language (CEO and CTO) (a small configuration sketch follows this entry).
(a) Role Specialization (b) Memory Stream (c) Self-Reflection
Figure 3: Three key mechanisms utilized in each chat. Role specialization ensures that each agent fulfills their designated functions and contributes effectively to the task-oriented dialogue. The memory stream maintains a comprehensive record of previous dialogues within the chat, enabling agents to make informed decisions. Self-reflection prompts the assistant to reflect on proposed decisions when both parties reach a consensus without triggering predefined termination conditions. | 2307.07924#12 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
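A minimal sketch of how the designing phase described above could be decomposed into atomic, role-paired chats. The role pairs and subtasks follow the description in the chunk; the data structure itself is purely illustrative and is not taken from ChatDev's released code.

```python
from dataclasses import dataclass

@dataclass
class AtomicChat:
    instructor: str      # role that gives instructions in this chat
    assistant: str       # role that responds with solutions
    subtask: str         # the single decision this chat must settle

designing_phase = [
    AtomicChat("CEO", "CPO", "decide the target software's modality"),
    AtomicChat("CEO", "CTO", "decide the programming language"),
]

for chat in designing_phase:
    print(f"{chat.instructor} -> {chat.assistant}: {chat.subtask}")
```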
2307.08072 | 12 | In addition to model weights, activations are also considered for quantization. However, due to the presence of outlier dimensions (Dettmers et al., 2022) in the feature activation values, quantizing activations in low-bit precision is widely acknowledged as a challenging task. These outlier dimensions exhibit significantly higher values compared to others and become particularly prominent as the model scale increases (a small sketch of flagging such dimensions follows this entry).
# 3 Do Emergent Abilities Exist in Quantized LLMs?
In this section, we aim to investigate the existence of emergent abilities in quantized LLMs, specifically focusing on in-context learning (ICL), chain-of-thought reasoning (CoT), and instruction following (IF). Next we first introduce the experimental setup and then present our key findings.
# 3.1 Experimental setup
In-Context Learning Test In order to evaluate the ICL ability, we utilize two widely used datasets for evaluating LLMs: MMLU (Hendrycks et al., 2021) and BBH (Srivastava et al., 2022a). MMLU serves as a comprehensive benchmark for assessing multi-task knowledge understanding in various | 2307.08072#12 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
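The chunk above notes that activation quantization is hard because a few feature dimensions carry much larger magnitudes than the rest. Below is a minimal sketch of how such outlier dimensions could be flagged; the threshold of 6.0 loosely follows the value used by LLM.int8() and is an assumption, as are the synthetic activations.

```python
import numpy as np

def outlier_dims(activations: np.ndarray, threshold: float = 6.0) -> np.ndarray:
    """Return indices of feature dimensions whose max |value| exceeds the threshold."""
    max_abs = np.abs(activations).max(axis=0)    # per-dimension max over tokens
    return np.flatnonzero(max_abs > threshold)

rng = np.random.default_rng(0)
acts = rng.normal(size=(256, 4096))              # (tokens, hidden_dim), synthetic
acts[:, [17, 803]] *= 20.0                       # inject two artificial outlier dimensions

print("outlier dimensions:", outlier_dims(acts)) # typically [17, 803] on this example
```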
2307.08074 | 12 | GLUE (Wang et al. 2018a) and SuperGLUE (Wang et al. 2019a) included a wider variety of natural language understanding tasks, further examining the capabilities of the models and making the results comparable for multi-task learning. Subsequent researchers extended the benchmarks to other languages, such as CLUE (Xu, Zhang, and Dong 2020) and LOT (Guan et al. 2022) in Chinese, and XGLUE (Liang et al. 2020) in multiple languages. While these works focus on evaluating inter-sentence information,2 our benchmark evaluates intra-sentence discourse phenomena that cross sentences.
# 3. Disco-Bench Benchmark
To comprehensively evaluate the target models, Disco-Bench covers three types of NLP tasks, including language understanding, translation and generation. We design the benchmarking tasks using the following criteria: (1) our tasks should measure the ability of models to handle discourse phenomena, thus we define discourse-related tasks at different levels of difficulty; (2) our datasets should contain rich discourse phenomena, thus we build document-level datasets with whole contexts extracted from literary texts. To this end, we introduce nine discourse-aware tasks, which are representative of challenging NLP tasks, and easily applicable to real-world situations. | 2307.08074#12 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 13 | Role Assignment System prompts/messages are used to assign roles to each agent during the role-playing process. In contrast to other conversational language models, our approach to prompt engineering is restricted solely to the initiation of role-playing scenarios. The instructor's system prompt/message is denoted as P_I, while the assistant's is denoted as P_A. These prompts assign roles to the agents before the dialogues begin. Let L_I and L_A represent two large language models. Using the system message, we have I ← L_I^{P_I} and A ← L_A^{P_A}, which serve as the instructor and assistant agents (Figure 3(a)), respectively. In our framework, the instructor initially acts as a CEO, engaging in interactive planning, while the assistant assumes the role of CPO, executing tasks and providing responses. To achieve role specialization, we employ inception prompting [23], which has proven effective in enabling agents to fulfill their roles. The instructor and assistant prompts encompass vital details concerning the designated task and roles, communication protocols, termination criteria, and constraints aimed at preventing undesirable behaviors (e.g., instruction redundancy, uninformative responses, infinite loops, etc.) (a minimal role-assignment sketch follows this entry). | 2307.07924#13 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
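A minimal sketch of the role-assignment step described above: two agents are instantiated from the same kind of chat backend by giving them different system prompts P_I (instructor, e.g. CEO) and P_A (assistant, e.g. CPO). The `chat_completion` function, the prompt wording, and the class name are all illustrative placeholders, not ChatDev's actual implementation.

```python
from dataclasses import dataclass, field

def chat_completion(messages: list[dict]) -> str:
    # Placeholder for the LLM backend (e.g. an OpenAI-style chat API); canned reply here.
    return "<MODALITY>: Desktop Application"

@dataclass
class RolePlayingAgent:
    system_prompt: str                               # inception prompt: role, task, protocol
    history: list[dict] = field(default_factory=list)

    def respond(self, incoming: str) -> str:
        self.history.append({"role": "user", "content": incoming})
        messages = [{"role": "system", "content": self.system_prompt}] + self.history
        reply = chat_completion(messages)
        self.history.append({"role": "assistant", "content": reply})
        return reply

P_I = "You are the CEO of a software company. Instruct the CPO to choose the product modality."
P_A = "You are the CPO. Follow the CEO's instruction and end with '<MODALITY>: <your choice>'."
instructor, assistant = RolePlayingAgent(P_I), RolePlayingAgent(P_A)
print(assistant.respond("Please decide whether we build a desktop app or a web app."))
```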
2307.08072 | 13 | domains, encompassing fields such as mathematics, computer science, humanities, and social science. Additionally, BBH is a challenging variant of Big-Bench (Srivastava et al., 2022b), which is proposed to concentrate on investigating the currently unsolvable tasks of LLMs. Then we conduct evaluations on the MMLU (i.e., five- and zero-shot) and BBH (i.e., three- and zero-shot) datasets, respectively.
Chain-of-Thought Reasoning Test To assess the CoT ability of the model, we employ the widely used GSM8K dataset. GSM8K is a reasoning dataset comprising 8K problems that collectively evaluate the model's ability in arithmetic reasoning and the composition of mathematical steps. Following the methodology introduced in Fu et al. (2023), we conduct evaluations using a few-shot setting, where demonstrations are provided. Each demonstration is formatted as <input, CoT, output>, allowing it to elicit the model's capability to reason and generate coherent chains of thought (an illustrative prompt-construction sketch follows this entry). | 2307.08072#13 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
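To make the <input, CoT, output> demonstration format concrete, here is a small sketch that assembles a few-shot chain-of-thought prompt in the style commonly used for GSM8K-like evaluation. The demonstrations and the exact prompt wording are illustrative and are not taken from the paper's prompt.

```python
# Each demonstration is an <input, CoT, output> triple.
demos = [
    {
        "input": "Tom has 3 boxes with 4 apples each. How many apples does he have?",
        "cot": "Each box has 4 apples and there are 3 boxes, so 3 * 4 = 12.",
        "output": "12",
    },
]

def build_cot_prompt(demos, question: str) -> str:
    parts = []
    for d in demos:
        parts.append(f"Question: {d['input']}\nAnswer: {d['cot']} The answer is {d['output']}.")
    parts.append(f"Question: {question}\nAnswer:")   # the model continues with its own CoT
    return "\n\n".join(parts)

print(build_cot_prompt(demos, "A pen costs 2 dollars. How much do 5 pens cost?"))
```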
2307.08074 | 13 | Accordingly, the benchmark contains a collection of nine datasets in Chinese and/or English, eight of which are newly created and one of which is expanded based on existing data. Table 1 lists the details of the benchmark, where each task contains training, validation,
2 LOT (Guan et al. 2022) evaluates models' abilities to model long text but ignores discourse information.
Table 1: An overview of our discourse-aware evaluation benchmark, covering language understanding, translation and generation. All datasets consist of document-level texts in the literature domain, which are rich in discourse phenomena. Eight of them are newly created by us and one is expanded based on existing corpus (i.e. MRC). It covers three languages: English (en), Modern Chinese (mzh/zh) and Classical Chinese (czh). We report commonly-used evaluation metrics. "#" means the number of instances (e.g. sentences, pairs or documents). "Test" represents both validation and testing sets. | 2307.08074#13 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 14 | Memory Stream The memory stream [32] is a mechanism that maintains a comprehensive record of an agent's previous dialogues, assisting in subsequent decision-making in an utterance-aware manner. Formally, the instructor's message at time t is denoted as I_t, the assistant's message as A_t, and the related decisions as S_t. Equation 1 encapsulates the collection of conversational messages up to time t:
M_t = ⟨(I_1, A_1), (I_2, A_2), ..., (I_t, A_t)⟩, S_t = ∪_{i=1}^{t} τ(I_i, A_i) (1)
where τ represents an LLM-based decision extractor, which can be implemented via communication protocol detection or self-reflection (detailed below). In the succeeding time step t + 1, the instructor leverages the historical dialogue message set M_t to impart a fresh instruction, I_{t+1}, which is then conveyed to the assistant along with M_t, as illustrated in Figure 3(b). The assistant responds with a solution or message, denoted as A_{t+1}, in Equation 2:
I_{t+1} = I(M_t, S_t), A_{t+1} = A(M_t, I_{t+1}, S_t) (2)
Following the acquisition of the solution A_{t+1} in response to the instruction I_{t+1}, the message stream undergoes an update process utilizing Equation 3 (a minimal bookkeeping sketch follows this entry):
M_{t+1} = M_t ∪ (I_{t+1}, A_{t+1}), S_{t+1} = S_t ∪ τ(I_{t+1}, A_{t+1}) (3) | 2307.07924#14 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
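A minimal sketch of the memory-stream bookkeeping formalized in Equations 1 to 3 above: M collects (instruction, answer) pairs and S accumulates decisions extracted by τ. The two agent functions and the decision extractor are canned placeholders so the loop runs end to end; they stand in for LLM calls and are not ChatDev's actual implementation.

```python
def instructor_agent(memory, decisions) -> str:
    # Placeholder for the instructor LLM call that produces I_{t+1}.
    return f"Instruction #{len(memory) + 1}: refine the design."

def assistant_agent(memory, instruction, decisions) -> str:
    # Placeholder for the assistant LLM call that produces A_{t+1}.
    return f"Decision reached for '{instruction}'"

def tau(instruction: str, answer: str) -> list[str]:
    # Placeholder decision extractor (protocol detection or self-reflection).
    return [answer] if "decision" in answer.lower() else []

M: list[tuple[str, str]] = []          # M_t: dialogue messages so far
S: list[str] = []                      # S_t: extracted decisions so far

for t in range(3):
    I_next = instructor_agent(M, S)               # I_{t+1} = I(M_t, S_t)
    A_next = assistant_agent(M, I_next, S)        # A_{t+1} = A(M_t, I_{t+1}, S_t)
    M.append((I_next, A_next))                    # M_{t+1} = M_t ∪ (I_{t+1}, A_{t+1})
    S.extend(tau(I_next, A_next))                 # S_{t+1} = S_t ∪ τ(I_{t+1}, A_{t+1})

print(len(M), "turns recorded,", len(S), "decisions extracted")
```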
2307.08072 | 14 | Instruction Following Test To evaluate instruction following ability, we refer to the proposed approach in Vicuna (Chiang et al., 2023) and conduct an automatic evaluation based on GPT3.5 (abbreviated as AutoEval). Specifically, we utilize the dataset in Vicuna that comprises 80 questions spanning 8 distinct categories. Then each model is tasked with generating a response for every question in the dataset.
Quantization Settings To evaluate the performance of the aforementioned emergent abilities under quantization, we conduct a series of comprehensive experiments. Our tests are conducted based on the implementation of GPTQ-for-LLaMA 2, which only focuses on weight quantization and encompasses all model components (i.e., query, key, value, output projection matrices in the attention module and gate, up, down projection matrices in the feed-forward networks). For model size, we include a collection of LLaMA models of 7B, 13B, 30B, and 65B parameters. We consider quantization at 2-bit, 4-bit, 8-bit, and a non-quantized (16-bit) precision. These diverse configurations aim to thoroughly evaluate the impact of different quantization settings on model performance (a memory-footprint sketch follows this entry). | 2307.08072#14 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
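A back-of-the-envelope sketch of how bit width translates into weight-only memory for the model sizes above. Assumptions: the approximate LLaMA parameter counts listed below, weights dominating memory, and no overhead for scales, zero-points, or activations; the printed figures therefore will not exactly match a measured memory column, which includes such overheads.

```python
PARAMS = {"LLaMA-7B": 6.7e9, "LLaMA-13B": 13.0e9, "LLaMA-30B": 32.5e9, "LLaMA-65B": 65.2e9}

def weight_memory_gib(n_params: float, bits: int) -> float:
    """Approximate weight-only footprint in GiB, ignoring quantization metadata."""
    return n_params * bits / 8 / 2**30

for name, n in PARAMS.items():
    row = ", ".join(f"{bits}-bit: {weight_memory_gib(n, bits):.1f} GiB" for bits in (16, 8, 4, 2))
    print(f"{name}: {row}")
```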
2307.08074 | 14 | Task (Metric), # Train / # Test, Domain, Language:
Understanding Task: SI (F1, EM): 48.0K / 17.5K, novel, zh; ZPR (F1, P, R): 2.2M / 8.1K, mixed, zh; MRC (Acc.): 26.4K / 6.5K, composition, mzh/czh.
Translation Task (d-BLEU, BLEU, TER, MET., COM.): NT: 1.9M / 1.3K, novel, zh→en; CCT: 778.1K / 5.3K, dianji, czh→mzh; PT: 47.1K / 2.7K, poetry, zh→en.
Generation Task: TE (BLEU, PPL): 2.4M / 10K, book, en; TI (PPL, Dist, BERTscore): 233K / 10K, book, zh; TC (PPL, Dist, BERTscore): 233K / 10K, book, zh.
and testing sets. In the following sections, we mainly introduce task definition, data construction, and evaluation methods.
# 3.1 Language Understanding Tasks | 2307.08074#14 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 15 | M_{t+1} = M_t ∪ (I_{t+1}, A_{t+1}), S_{t+1} = S_t ∪ τ(I_{t+1}, A_{t+1}) (3)
We establish communication protocols through prompts. For example, an ending message satisfying specific formatting requirements (e.g., "<MODALITY>: Desktop Application") is generated when both parties reach a consensus. The system monitors communication to ensure compliance with the designated format, allowing for the conclusion of the current dialogue (a small protocol-detection sketch follows this entry).
Self-Reflection Occasionally, we have observed dialogues where both parties reach a consensus but do not trigger the predefined communication protocols as termination conditions. In such cases,
(a) Naive Instruction in Coding (b) Thought Instruction in Coding (c) Naive Instruction in Testing (d) Thought Instruction in Testing | 2307.07924#15 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
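A small sketch of the protocol detection described above: the chat monitor looks for an ending message in the required format and, when it finds one, closes the current chat and records the decision. The `<MODALITY>: ...` pattern is taken from the example in the chunk; the regex, function name, and messages are otherwise illustrative.

```python
import re

PROTOCOL = re.compile(r"<MODALITY>:\s*(?P<decision>.+)", re.IGNORECASE)

def check_termination(message: str):
    """Return the extracted decision if the message satisfies the ending protocol, else None."""
    match = PROTOCOL.search(message)
    return match.group("decision").strip() if match else None

reply = "After discussion, we agree.\n<MODALITY>: Desktop Application"
decision = check_termination(reply)
if decision:
    print("chat can terminate; decision =", decision)   # -> Desktop Application
```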
2307.08074 | 15 | and testing sets. In the following sections, we mainly introduce task definition, data construction, and evaluation methods.
# 3.1 Language Understanding Tasks
Language understanding aims to analyze what human language means, covering various tasks such as natural language inference and story comprehension. Discourse is one of the fundamental problems for understanding models. It is difficult to determine the referents of pronouns and definite noun phrases, and to understand elliptical sentence fragments, as well as a host of other long-range language phenomena that have not even been adequately characterized, much less conquered (Bates 1995). As shown in Figure 5, we classify tasks into three difficulty levels according to the length of contexts and the amount of knowledge required for discourse modeling. | 2307.08074#15 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 16 | (c) Naive Instruction in Testing
Figure 4: The thought instruction mitigates code hallucinations during the coding and testing phases. Instead of providing generic instructions, thought instruction involves role swapping to inquire about unimplemented methods or explain feedback messages caused by bugs. This step allows for a clearer understanding of the existing code and identifies the specific gaps that need to be addressed. By gaining this awareness, the roles can then switch back, and the instructor can provide more specific instructions to guide the programmer accurately.
we introduce a self-reflection mechanism, which involves extracting and retrieving memories. To implement this mechanism, we enlist a "pseudo self" as a new questioner and initiate a fresh chat. The pseudo questioner informs the current assistant of all the historical records from previous dialogues and requests a summary of the conclusive information from the dialogue, as shown in Figure 3(c). This mechanism effectively encourages the assistant to reflect upon the decisions proposed and discussed during the dialogue (a minimal sketch follows this entry).
# 2.3 Coding | 2307.07924#16 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
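A minimal sketch of the self-reflection mechanism described above: when a chat ends without triggering the termination protocol, a "pseudo questioner" replays the dialogue history to the assistant and asks for a summary of the conclusions. The `chat_completion` function is a placeholder for the underlying LLM and simply returns a canned string here; prompts and names are illustrative.

```python
def chat_completion(messages: list[dict]) -> str:
    # Placeholder for the LLM backend; returns a canned summary so the sketch runs.
    return "Conclusion: the software will be a desktop application written in Python."

def self_reflect(history: list[tuple[str, str]]) -> str:
    transcript = "\n".join(f"Instructor: {i}\nAssistant: {a}" for i, a in history)
    messages = [
        {"role": "system", "content": "You are a pseudo questioner reviewing a finished chat."},
        {"role": "user", "content": f"Here is the full dialogue:\n{transcript}\n"
                                    "Summarize the conclusive decisions reached above."},
    ]
    return chat_completion(messages)

history = [("Which modality should we build?", "A desktop application seems best."),
           ("Agreed, let us proceed.", "Great, desktop application it is.")]
print(self_reflect(history))
```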
2307.08072 | 16 | Table 1 results for the LLaMA family at 16/8/4/2-bit precision; MMLU (Acc, 5-shot / 0-shot) and BBH (Acc, 0-shot / 3-shot):
LLaMA-7B: 16-bit MMLU 35.2 / 29.2, BBH 17.3 / 31.0; 8-bit MMLU 33.7 / 28.4, BBH 17.2 / 31.3; 4-bit MMLU 34.2 / 31.0, BBH 18.8 / 30.8; 2-bit MMLU 3.8 / 2.3, BBH 0.4 / 2.7.
LLaMA-13B: 16-bit MMLU 47.0 / 41.4, BBH 20.9 / 36.6; 8-bit MMLU 46.3 / 40.5, BBH 21.1 / 37.2; 4-bit MMLU 45.9 / 39.0, BBH 19.8 / 36.6; 2-bit MMLU 14.8 / 4.9, BBH 4.2 / 18.1.
LLaMA-30B: 16-bit MMLU 58.4 / 53.7, BBH 19.5 / 39.4; 8-bit MMLU 57.9 / 54.2, BBH 19.9 / 39.4; 4-bit MMLU 57.3 / 53.7, BBH 18.3 / 40.2; 2-bit MMLU 26.1 / 3.7, BBH 3.8 / 25.3.
LLaMA-65B: 16-bit -, 8-bit -; 4-bit MMLU 63.0 / 57.1, BBH 21.9 / 42.1; 2-bit MMLU 22.6 / 9.0, BBH 1.0 / 24.0.
GSM8k (Acc): LLaMA-7B 13.1 / 13.5 / 12.2 / 0.0 (16/8/4/2-bit); LLaMA-13B 16-bit 16.4 | 2307.08072#16 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 16 | SI (Speaker Identification). Given a paragraph that may contain an utterance and the surrounding context, SI aims to identify the corresponding speaker(s) for the utterance or the content within quotation marks if no speaker exists. To achieve this goal, models need to examine the existence of quotes, recognize named entities or phrases that can serve as speakers, and resolve coreference. We construct the dataset that contains 66K instances based on eighteen Chinese novels. Unlike previous SI datasets such as P&P (He, Barbosa, and Kondrak 2013) in which all speakers are entities, speakers in our dataset can also be phrases, pronouns, or multi-entities. The macro-averaged F1 and exact match (EM)
can be used as the evaluation metrics following standard extractive machine reading comprehension tasks (e.g. Rajpurkar et al. 2016) (a small scoring sketch follows this entry). | 2307.08074#16 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
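For concreteness, here is a small sketch of the exact match and F1 scores mentioned above, computed SQuAD-style over predicted and gold speaker spans and averaged over examples. Whitespace tokenization and the example spans are simplifying assumptions; Chinese text would normally be scored at the character level.

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    return float(pred.strip() == gold.strip())

def f1(pred: str, gold: str) -> float:
    p_toks, g_toks = pred.split(), gold.split()
    common = Counter(p_toks) & Counter(g_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p_toks), overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)

preds = ["Elizabeth Bennet", "the old man"]
golds = ["Elizabeth Bennet", "an old fisherman"]
print(sum(exact_match(p, g) for p, g in zip(preds, golds)) / len(preds))   # mean EM
print(sum(f1(p, g) for p, g in zip(preds, golds)) / len(preds))            # mean F1
```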
2307.07924 | 17 | # 2.3 Coding
The coding phase involves three predefined roles: CTO, programmer, and art designer. The chat chain decomposes the coding phase into sequential atomic chatting tasks, such as generating complete code (CTO and programmer) and devising a graphical user interface (designer and programmer). Based on the main designs discussed in the previous phase, the CTO instructs the programmer to implement a software system using markdown format. The programmer generates code in response, and the corresponding code is extracted based on the markdown format (a small extraction sketch follows this entry). The designer proposes a user-friendly graphical user interface (GUI) that uses graphical icons for user interaction instead of text-based commands. Then, the designer creates visually appealing graphics using external text-to-image tools [35], which the programmer incorporates into the GUI design using standard toolkits. | 2307.07924#17 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
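A small sketch of the markdown-based code extraction mentioned above: the programmer agent's reply is scanned for fenced code blocks, whose contents can then be written to source files. The regex and the example reply are illustrative, not ChatDev's actual extraction logic.

```python
import re

FENCE = re.compile(r"```(?:\w+)?\s*\n(.*?)```", re.DOTALL)

def extract_code_blocks(reply: str) -> list[str]:
    """Return the contents of all fenced code blocks in an LLM reply."""
    return [block.strip() for block in FENCE.findall(reply)]

tick = "`" * 3                                    # avoid writing a literal fence in this sketch
reply = f"Here is the implementation:\n{tick}python\ndef main():\n    print('hello')\n{tick}\n"
print(extract_code_blocks(reply))                 # -> ["def main():\n    print('hello')"]
```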
2307.08072 | 17 | BBH 3-shot (cont.): LLaMA-30B 39.4 / 40.2 / 25.3 (8/4/2-bit); LLaMA-65B - / - / 42.1 / 24.0 (16/8/4/2-bit).
GSM8k (Acc), 16/8/4/2-bit: LLaMA-7B 13.1 / 13.5 / 12.2 / 0.0; LLaMA-13B 16.4 / 16.5 / 15.6 / 0.0; LLaMA-30B 34.7 / 34.7 / 35.4 / 0.2; LLaMA-65B - / - / 48.5 / 0.8.
AutoEval, 16/8/4/2-bit: LLaMA-7B 1121/1134, 1092/1335, 1058/1330, 607/1263; LLaMA-13B 1084/1335, 1084/1336, 1119/1321, 635/1258; LLaMA-30B 1142/1317, 1116/1325, 1120/1325, 630/1198; LLaMA-65B -, -, 1171/1319, 658/1309.
Mem. (GiB), 16/8/4/2-bit: LLaMA-7B 13.9 / 7.9 / 4.8 / 3.2; LLaMA-13B 26.6 / 14.8 / 8.6 / 5.5; LLaMA-30B 65.4 / 35.3 / 20.0 / 12.2; LLaMA-65B - / - / 38.2 / 22.9.
WikiText (PPL), 16/8/4/2-bit: LLaMA-7B 5.7 / 5.7 / 5.8 / 3937.9; LLaMA-13B 5.1 / 5.1 / 5.2 / 142.6; LLaMA-30B 4.1 / 4.1 / 4.2 / 25.1; LLaMA-65B - / - / 3.9 / 77.8.
Tokens/s, 16/8/4/2-bit: LLaMA-7B 33.032 / 30.833 / 31.317 / 33.266; LLaMA-13B 24.968 / 17.754 / 18.139 / 18.422; LLaMA-30B 16.596 / 8.187 / 8.371 / 8.649; LLaMA-65B - / - / 4.793 / 4.826 | 2307.08072#17 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 17 | ZPR (Zero Pronoun Recovery). ZPR aims to recover omitted pronouns in terms of position and form, according to the anaphora information in the given sentence (Yang and Xue 2010; Wang et al. 2018b,c, 2019b; Zhang et al. 2019b; Song et al. 2020). Figure 5 shows an example, where the omitted pronoun "她 (She)" can be recovered according to its anaphora "菲比 (Phoebe)". BaiduKnows is a widely-used Chinese ZPR corpus, which contains only 5K human-annotated sentences extracted from a Q&A forum (Zhang et al. 2019b). The insufficient data limits the investigation of model performance on ZPR. Inspired by Wang et al. (2016), we automatically built a large-scale training set from Chinese-English movie subtitles using word alignments. For a clean test set, we hire experts to manually annotate 8K sentences covering five domains (i.e. 1.7K novel, 2.2K movie subtitle, 1.2K Q&A forum, 1.6K news, and 1.5K resume). The label set contains | 2307.08074#17 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 18 | Code Management To handle complex software systems, ChatDev utilizes object-oriented programming languages like Python. The modularity of object-oriented programming allows for self-contained objects, aiding troubleshooting and collaborative development. Reusability enables code reuse through inheritance, reducing redundancy. We introduce the "version evolution" mechanism to restrict visibility to the latest code version between roles, discarding earlier code versions from the memory stream. The programmer manages the project using Git-related commands. Proposed code modifications and changes increment the software version by 1.0. Version evolution gradually eliminates code hallucinations. The combination of object-oriented programming and version evolution is suitable for dialogues involving long code segments (a minimal bookkeeping sketch follows this entry).
Thought Instruction Traditional question answering can lead to inaccuracies or irrelevant informa- tion, especially in code generation, where naive instructions may result in unexpected hallucinations.
| 2307.07924#18 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
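The "version evolution" mechanism described in the ChatDev chunk above (2307.07924 #18) can be pictured as a small memory policy: only the latest code version is visible to the agents, and each accepted modification bumps the version by 1.0. A minimal sketch under that reading; the `Codebase` class and its methods are illustrative, not ChatDev's actual API.

```python
class Codebase:
    """Toy illustration of ChatDev-style version evolution (hypothetical API)."""

    def __init__(self):
        self.version = 1.0
        self.files = {}      # filename -> latest source code (the only view agents see)
        self._history = []   # archived versions, never shown in the memory stream

    def apply_modification(self, filename: str, new_source: str) -> None:
        # Archive the old version, then overwrite it with the proposed change.
        if filename in self.files:
            self._history.append((self.version, filename, self.files[filename]))
        self.files[filename] = new_source
        self.version += 1.0  # each accepted change increments the version by 1.0

    def visible_context(self) -> str:
        # Only the newest code is exposed to the dialogue between roles.
        return "\n\n".join(f"# {name} (v{self.version})\n{src}"
                           for name, src in self.files.items())

cb = Codebase()
cb.apply_modification("main.py", "print('hello')")
cb.apply_modification("main.py", "print('hello, world')")
print(cb.version)            # 3.0
print(cb.visible_context())  # shows only the latest main.py
```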
2307.08072 | 18 | Table 1: Evaluation results on MMLU, BBH, GSM8k and AutoEval of the model variants in the LLaMA family. The results of the LLaMA-65B model at 16-bit and 8-bit precisions are not included due to memory constraints on a single GPU.
[Figure 1 panels: accuracy (MMLU, BBH, GSM8K) and relative score (AutoEval) plotted against total model bits for 2-bit, 4-bit, 8-bit, and 16-bit quantized models]
(a) MMLU (5-shot) (b) BBH (3-shot) (c) GSM8K (CoT) (d) AutoEval | 2307.08072#18 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 18 | novel, 2.2K movie subtitle, 1.2K Q&A forum, 1.6K news, and 1.5K resume). The label set contains 30 Chinese pronouns according to person, number, and gender (as shown in Table 12). (Zero) anaphora resolution is an alternative discourse understanding task, which aims to identify the antecedent of a referential (zero) pronoun (Kong and Zhou 2010; Mitkov 2014). However, we did not consider this task for two reasons: (1) more than 50% of zero pronouns are non-anaphoric and thus cannot be modelled in the resolution task (Rao et al. 2015); (2) unlike previous benchmarks such as OntoNotes and CLUEWSC2020, which mainly focus on explicit pronouns, ZPR considers implicit pronouns, making the two tasks complementary. We follow common practice to use micro F1, precision and recall as the evaluation metrics. | 2307.08074#18 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
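The ZPR chunk above (2307.08074 #18) evaluates with micro F1, precision and recall. A minimal sketch of how such scores can be computed over predicted pronoun recoveries; representing gold and predicted recoveries as sets of (position, pronoun) tuples is an illustrative assumption, not the Disco-Bench data format.

```python
def micro_prf(gold_by_sent, pred_by_sent):
    """Micro-averaged precision/recall/F1 for zero-pronoun recovery.

    gold_by_sent / pred_by_sent: one set of (position, pronoun) tuples per
    sentence, e.g. {(3, "她")}.  This encoding is only for illustration.
    """
    tp = fp = fn = 0
    for gold, pred in zip(gold_by_sent, pred_by_sent):
        tp += len(gold & pred)   # correctly recovered pronouns
        fp += len(pred - gold)   # spurious recoveries
        fn += len(gold - pred)   # missed recoveries
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = micro_prf([{(3, "她")}, {(0, "我")}], [{(3, "她")}, {(1, "我")}])
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # P=0.50 R=0.50 F1=0.50
```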
Thought Instruction Traditional question answering can lead to inaccuracies or irrelevant information, especially in code generation, where naive instructions may result in unexpected hallucinations.
This issue becomes particularly problematic when generating code. For instance, when instructing the programmer to implement all unimplemented methods, a naive instruction may result in hallucinations, such as including methods that are reserved as unimplemented interfaces. To address this, we propose the "thought instruction" mechanism, inspired by chain-of-thought prompting [44]. It involves explicitly addressing specific problem-solving thoughts in instructions, akin to solving subtasks in a sequential manner. As shown in Figure 4(a) and 4(b), thought instruction includes swapping roles to inquire about which methods are not yet implemented and then switching back to provide the programmer with more precise instructions to follow. By incorporating thought instruction, the coding process becomes more focused and targeted. The explicit expression of specific thoughts in the instructions helps to reduce ambiguity and ensures that the generated code aligns with the intended objectives. This mechanism enables a more accurate and context-aware approach to code completion, minimizing the occurrence of hallucination and resulting in more reliable and comprehensive code outputs.
# 2.4 Testing | 2307.07924#19 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
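The "thought instruction" chunk above (2307.07924 #19) describes a two-step exchange: first swap roles to ask which methods are still unimplemented, then instruct the programmer with that concrete list. A minimal sketch of that pattern, assuming a hypothetical `chat(system, user)` wrapper around an LLM API; none of these names are ChatDev's actual code.

```python
def chat(system: str, user: str) -> str:
    """Placeholder for an LLM call (e.g. a chat-completion request); hypothetical."""
    raise NotImplementedError

def thought_instruction_step(current_code: str) -> str:
    # Step 1: role swap -- ask the assistant (acting as programmer) to enumerate
    # what is still missing, instead of issuing a single naive instruction.
    missing = chat(
        system="You are a programmer reviewing your own code.",
        user=f"List the methods in the code below that are still unimplemented:\n{current_code}",
    )
    # Step 2: swap back and give a precise, grounded instruction based on that list,
    # which reduces hallucinated completions of methods meant to stay abstract.
    revised = chat(
        system="You are a programmer completing software.",
        user=(f"Complete ONLY the following unimplemented methods:\n{missing}\n\n"
              f"Current code:\n{current_code}\nReturn the full updated code."),
    )
    return revised
```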
2307.08072 | 19 | Figure 1: Performance comparison of quantized models under varied memory costs. For AutoEval, the term "Relative Score" denotes the score ratio between quantized models and GPT3.5. The x-axis denotes the total number of bits after quantization.
we compare the responses generated by two models side-by-side and acquire a "score" for each model by GPT3.5. In addition, to quantify the memory cost, we follow Dettmers and Zettlemoyer (2022) and calculate the total (model) bits by multiplying the total number of parameters with the actual number of representation bits.
inal performance (i.e., 16-bit floating-point number). However, a significant decline is observed when employing 2-bit quantization, with results approaching near-random levels, e.g., around 0.25 in 4-choice classification tasks for MMLU and BBH and 0.0 for GSM8K. It indicates that 4-bit quantization can effectively retain emergent abilities on these test datasets.
# 3.2 Results and Analysis
In this part, we present the experimental results and the corresponding analysis. | 2307.08072#19 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
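The chunk above (2307.08072 #19) quantifies memory cost as total model bits, i.e. the number of parameters multiplied by the number of representation bits (following Dettmers and Zettlemoyer, 2022). A small sketch of that measure; the parameter counts are approximate public figures for the LLaMA family, included only for illustration.

```python
def total_model_bits(n_params: float, bits: int) -> float:
    # Total (model) bits = parameter count x bits per parameter.
    return n_params * bits

# Approximate parameter counts for LLaMA models (illustrative).
llama_params = {"7B": 6.7e9, "13B": 13.0e9, "30B": 32.5e9}

for name, n in llama_params.items():
    for bits in (2, 4, 8, 16):
        gib = total_model_bits(n, bits) / 8 / 2**30  # convert bits -> GiB
        print(f"LLaMA-{name} @ {bits}-bit: ~{gib:.1f} GiB")
```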
2307.08074 | 19 | MRC (Machine Reading Comprehension). The goal of MRC is to answer questions based on understanding the meaning of a given unstructured text (Liu et al. 2019a; Zeng et al. 2020). We collected the Haihua2021 corpus, which contains 8K articles extracted from reading comprehension tests in primary/high school examinations.3 Each article is followed by at least one question with 2~5 choices and one correct answer. We manually created 2K articles as an additional supplement. Different from previous benchmarks based on Wikipedia texts (Cui et al. 2019) or Chinese idioms (Zheng, Huang, and Sun 2019), the Haihua2021 corpus is in the literary domain (i.e. modern/ancient composition and poetry) and contains rich discourse phenomena. Different from the C3 benchmark (Sun et al. 2020), where problems are collected from Chinese-as-a-second-language examinations, this dataset is extracted from more challenging examinations designed for native speakers. Considering the average length of texts, the Haihua2021 corpus is also more challenging than C3 (i.e. the length ratio is 753:117).
# 3.2 Language Translation Tasks | 2307.08074#19 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
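The MRC chunk above (2307.08074 #19) poses multiple-choice questions (2~5 options, one correct answer), for which accuracy is the natural score. A minimal sketch; the item encoding and the `predict` callable are illustrative assumptions, not the Haihua2021 release format.

```python
def mrc_accuracy(items, predict):
    """items: list of dicts with 'article', 'question', 'choices', 'answer' (index).
    predict: callable returning a choice index.  Both are illustrative assumptions."""
    correct = sum(predict(it["article"], it["question"], it["choices"]) == it["answer"]
                  for it in items)
    return correct / len(items) if items else 0.0

# Tiny usage example with a trivial "always pick the first choice" baseline.
demo = [{"article": "...", "question": "...?", "choices": ["A", "B", "C"], "answer": 0}]
print(mrc_accuracy(demo, lambda article, question, choices: 0))  # 1.0
```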
2307.07924 | 20 | # 2.4 Testing
Even for human programmers, there is no guarantee that the code they write on the first attempt is always error-free. Rather than discarding incorrect code outright, humans typically analyze and investigate code execution results to identify and rectify implementation errors [8]. In ChatDev, the testing phase involves three roles: programmer, reviewer, and tester. The process consists of sequential atomic chatting tasks, including peer review (programmer and reviewer) and system testing (programmer and tester). Peer review, or static debugging, examines source code to identify potential issues. System testing, a form of dynamic debugging, verifies software execution through tests conducted by the programmer using an interpreter. This testing focuses on evaluating application performance through black-box testing.
In our practice, we observed that allowing two agents to communicate solely based on feedback messages from an interpreter does not result in a bug-free system. The programmer's modifications may not strictly follow the feedback, leading to hallucinations. To address this, we further employ the thought instruction mechanism to explicitly express debugging thoughts in the instructions (Figure 4(c) and 4(d)). The tester executes the software, analyzes bugs, proposes modifications, and instructs the programmer accordingly. This iterative process continues until potential bugs are eliminated and the system runs successfully. | 2307.07924#20 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
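The testing chunk above (2307.07924 #20) combines interpreter feedback with thought instruction in an iterative debug loop. A minimal sketch of such a loop under those assumptions; the `chat()` stub is hypothetical, and running untrusted generated code this way would normally require sandboxing.

```python
import subprocess
import sys

def chat(system: str, user: str) -> str:
    raise NotImplementedError  # hypothetical LLM wrapper, as in the earlier sketch

def run_software(entry: str = "main.py", timeout: int = 30) -> str:
    """Execute the generated program; return the traceback text, or '' on success."""
    proc = subprocess.run([sys.executable, entry], capture_output=True,
                          text=True, timeout=timeout)
    return "" if proc.returncode == 0 else proc.stderr

def debug_loop(code: str, max_tests: int = 5) -> str:
    for _ in range(max_tests):
        with open("main.py", "w") as f:
            f.write(code)
        error = run_software()
        if not error:
            break  # the system runs successfully
        # Thought instruction: the tester first analyses the bug, then the programmer
        # is instructed with that analysis rather than only the raw interpreter log.
        analysis = chat(system="You are a software tester.",
                        user=f"Analyse this traceback and propose a fix:\n{error}")
        code = chat(system="You are a programmer.",
                    user=f"Apply the fix below and return the full file.\n"
                         f"Fix: {analysis}\nCode:\n{code}")
    return code
```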
2307.08072 | 20 | Overall, the three kinds of emergent abilities seem to be seldom affected by 4-bit quantization. Table 1 presents the test results of the models using 2-bit, 4-bit, 8-bit and 16-bit precision across multiple datasets, including MMLU, BBH for ICL, GSM8K for CoT, AutoEval for IF and WikiText for general language modeling ability. As we can see, the results obtained using 4-bit and 8-bit quantization are very similar to the original performance. 4-bit precision exhibits a favorable trade-off in terms of both total bits and performance. As shown in Table 1, it can be observed that 4-bit quantization offers a notable reduction in memory cost. To further examine the relation between model performance and resource usage, we follow Dettmers and Zettlemoyer (2022) to introduce the measure of total bits by multiplying the number of the parameters and the bits, and report the test results in Figure 1 by varying the number of total bits. From the four accuracy curves corresponding to different bit precision, we can see that 4-bit precision consistently exhibits higher model accuracy under the same amount of total model bits. Thus, 4-bit quantization is recommended to be used for a favorable balance between memory cost and model performance in practice. | 2307.08072#20 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
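Given the conclusion in the chunk above (2307.08072 #20) that 4-bit quantization offers the best memory/performance trade-off, a common way to try 4-bit inference in practice is the Hugging Face transformers + bitsandbytes integration sketched below. This is not the GPTQ setup used in the paper, the checkpoint name is only illustrative, and the exact arguments may differ across library versions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "huggyllama/llama-7b"  # illustrative checkpoint name

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4 bits
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

inputs = tokenizer("Q: What is 15 + 27? A:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```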
2307.08074 | 20 | # 3.2 Language Translation Tasks
Language translation is a sequence-to-sequence generation task to translate text from one language to another. Discourse information is important for document-level translation to produce cohesive and coherent translations (Wang et al. 2017; Bawden et al. 2018). As shown in Figure 6, we design three translation tasks of increasing hardness, which differ in the conciseness of source sentences in Chinese. The more concise the Chinese text, the more discourse information is needed for translation. There are a number of evaluation metrics for measuring general performance of MT systems. BLEU is the most
3https://www.biendata.xyz/competition/haihua_2021.
| 2307.08074#20 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 21 | In cases where an interpreter struggles with identifying fine-grained logical issues, the involvement of a human client in software testing becomes optional. ChatDev enables the human client to provide feedback and suggestions in natural language, similar to a reviewer or tester, using black-box testing or other strategies. ChatDev, based on human input, can understand and utilize this feedback to refine the software system.
# 2.5 Documenting
After the designing, coding, and testing phases, ChatDev employs four agents (CEO, CPO, CTO, and programmer) to generate software project documentation. Using large language models, we leverage few-shot prompting [5] with in-context examples for document generation. The CTO instructs the programmer to provide configuration instructions for environmental dependencies, resulting in a document like requirements.txt. This document allows users to configure the environment independently. Simultaneously, the CEO communicates requirements and system design to the CPO, who generates a user manual.
# 3 Experiments | 2307.07924#21 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
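The documenting chunk above (2307.07924 #21) uses few-shot (in-context) prompting to produce artifacts such as requirements.txt and a user manual. A minimal sketch of assembling such a prompt; the worked example and the `chat()` stub are illustrative assumptions, not ChatDev's actual templates.

```python
def chat(system: str, user: str) -> str:
    raise NotImplementedError  # hypothetical LLM wrapper

FEW_SHOT_EXAMPLE = (
    "Source files: app.py (uses flask, requests)\n"
    "requirements.txt:\nflask>=2.0\nrequests>=2.28\n"
)

def write_requirements(source_summary: str) -> str:
    # Few-shot prompting: show one worked example, then ask for the new case.
    prompt = (f"Example:\n{FEW_SHOT_EXAMPLE}\n"
              f"Now do the same for this project:\nSource files: {source_summary}\n"
              f"requirements.txt:")
    return chat(system="You are the CTO documenting environment dependencies.",
                user=prompt)
```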
2307.08072 | 21 | The scaling effect depends on speciï¬c tasks, and increasing the model scale beneï¬ts the CoT task the most. We conducted an investigation, as de- picted in Figure 1, to examine the impact of scaling the total number of bits on the performance of a low-bit model across multiple tasks. Overall, our analysis reveals that for the 2-bit precision, increas- ing the total bits (i.e.,a larger model size) does not yield substantial improvements, especially for MMLU and GSM8K, as the obtained outcomes do not exhibit superiority over random scores (i.e., 0.25 on MMLU and 0.0 on GSM8K). Indeed, it is still a challenging task to effectively mitigate the errors resulting from quantization in 2-bit models. For 4-bit (or above) precision models, we observe notable improvements on the CoT tasks when in- creasing the total bits, which are not that signiï¬cant for ICL test. Further, for IF test, a small model scale can be sufï¬cient to achieve very good perfor- mance in our test experiments3. | 2307.08072#21 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 21 | 3https://www.biendata.xyz/competition/haihua_2021.
[Figure 5 graphic: Chinese example inputs/outputs for the understanding tasks (ZPR, MRC, etc.), annotated with discourse context and features and task descriptions; the embedded text is not cleanly recoverable] | 2307.08074#21 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 22 | # 3 Experiments
Our experimental setup employs the "ChatGPT-turbo-16k" version of ChatGPT to simulate multi-agent software development. The language model temperature is set to 0.2 for controlled generation. In the coding phase, we allow a maximum of 5 attempts for code completion. The reviewer is permitted 5 chats to propose modifications, and a maximum of 5 software system tests are conducted in the testing phase. For Python-based systems, we use Python 3.8.16 as the interpreter for testing. Camel [23] has curated an instruction-following dialogue dataset, which spans across 20 programming languages, 50 domains, and 50 tasks per domain. From this extensive task set, we randomly selected
[Figure graphic: generated artifacts, including a requirements.txt listing dependencies such as numpy, and a "Gomoku Game" user manual with Introduction, Installation, and Main Features sections]
Figure 5: The documenting phase involves generating relevant documents, such as external depen- dency specifications and user instructions. The user manual provides comprehensive information about the softwareâs technical architecture, installation instructions, and features, serving as a valuable resource for users. Once the dependencies are installed, a human client can execute the software using a suitable interpreter.
70 tasks1, including both specific and relatively abstract cases, to serve as the basis for analysis in our ChatDev software development. | 2307.07924#22 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
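The experiment chunk above (2307.07924 #22) fixes several run parameters: temperature 0.2, at most 5 code-completion attempts, 5 review chats, 5 system tests, and Python 3.8.16 for testing. A small configuration sketch capturing those values; the field names and the dataclass itself are illustrative, not ChatDev's actual config schema.

```python
from dataclasses import dataclass

@dataclass
class ChatDevRunConfig:
    model: str = "gpt-3.5-turbo-16k"   # the paper refers to this as "ChatGPT-turbo-16k"
    temperature: float = 0.2           # low temperature for controlled generation
    max_code_completion_attempts: int = 5
    max_review_chats: int = 5
    max_system_tests: int = 5
    interpreter: str = "python3.8"     # Python 3.8.16 is used for testing

config = ChatDevRunConfig()
print(config)
```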
2307.08072 | 22 | Low-bit quantization performance benefits from the demonstrations in ICL tests. For complex tasks, we can provide few-shot demonstrations for improving the model performance. To examine this, in Table 1, we also present the results with few-shot demonstrations for ICL. We can observe a notable advantage of the five-shot setting compared to the zero-shot setting, especially for 2-bit precision on LLaMA-30B (i.e., 26.1 vs. 3.7). It suggests that the low-bit quantization performance of LLMs can be largely improved when appropriate demonstrations are utilized. However, such an improvement is not significant for 2-bit precision in LLaMA-7B (i.e., 3.8 vs. 2.3), which indicates that the parameter scale must reach a certain level for this ability.
For CoT tests, extreme 2-bit quantization requires a large model scale. From Table 1, we find that the CoT ability for 2-bit precision no longer exists for 7B and 13B models on our test datasets, since they both get 0.0 accuracy on GSM8K while 30B achieves 0.2. It suggests a sufficiently large
3We plan to conduct evaluation experiments on IF at a larger scale. | 2307.08072#22 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
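The chunk above (2307.08072 #22) shows that low-bit models benefit markedly from few-shot demonstrations. A minimal sketch of building a k-shot prompt from demonstration pairs; the "Question/Answer" formatting is a common convention assumed here, not the exact template used in the paper.

```python
def build_few_shot_prompt(demos, question, k=5):
    """demos: list of (question, answer) pairs used as in-context demonstrations."""
    lines = []
    for q, a in demos[:k]:
        lines.append(f"Question: {q}\nAnswer: {a}\n")
    lines.append(f"Question: {question}\nAnswer:")
    return "\n".join(lines)

demos = [("What is 2 + 3?", "5"), ("What is 10 - 4?", "6")]
print(build_few_shot_prompt(demos, "What is 7 + 8?", k=2))
```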
2307.08074 | 22 | Figure 5: Illustration of the proposed understanding tasks in terms of discourse properties and task definition. As seen, SI needs to recognize named entities and resolve coreference, while ZPR demands the further ability to tackle zero anaphora and gender identification. MRC is the hardest because it should fully understand coherence (e.g. discourse structure based on temporal relations) in addition to the cohesion required by the previous tasks. English translations of example sentences are listed in Table 17.
widely-used one, which measures the precision of n-grams of the MT output compared to the reference, weighted by a brevity penalty to punish overly short translations (Papineni et al. 2002). TER is an error metric for machine translation that measures the number of edits required to change a system output into one of the references (Snover et al. 2006). METEOR incorporates semantic information by calculating either exact match, stem match, or synonymy match (Banerjee and Lavie 2005). Furthermore, COMET is a neural framework for training multilingual MT evaluation models which obtains new SOTA levels of correlation with human judgements (Rei et al. 2020). | 2307.08074#22 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
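For the MT metrics listed in the chunk above (2307.08074 #22), BLEU and TER can be computed with the sacrebleu library as sketched below (METEOR and COMET require their own packages). The hypothesis and reference strings are invented for illustration.

```python
import sacrebleu

hypotheses = ["Phoebe said she would come to the party."]
# One reference stream, with one reference per hypothesis.
references = [["Phoebe said that she would come to the party."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
ter = sacrebleu.corpus_ter(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
print(f"TER  = {ter.score:.2f}")
```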
2307.07924 | 23 | 70 tasks1, including both specific and relatively abstract cases, to serve as the basis for analysis in our ChatDev software development.
Software Statistics We performed a statistical analysis on the software systems generated by ChatDev. Key metrics, including the total dialogue turns, consumed tokens, software files, image assets, and version updates, were examined. Table 1 presents these metrics, providing valuable insights into the communication-based software development process. It offers a comprehensive overview of ChatDevâs development, covering aspects such as versioning, file composition, code complexity, and development iterations.
Table 1: The statistical analysis of ChatDev's software development, including minimum (Min), maximum (Max), and average (Avg.) values for various aspects.
                            Min     Max     Avg.
# Code Files                2.00    8.00    4.26
# Asset Files               0.00   21.00    8.74
# Document Files            4.00    5.00    4.04
# Lines of Source Codes    39.00  359.00  131.61
# Lines of Dependencies     1.00    5.00    2.90
# Lines of User Manual     31.00  232.00   53.96
# Version Updates           5.00   42.00   13.23
# Software Re-development   1.00    5.00    1.40 | 2307.07924#23 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
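The statistics in Table 1 of the chunk above (2307.07924 #23) are simple min/max/mean aggregates over per-project measurements. A small sketch of producing such a summary; the sample numbers are invented for illustration.

```python
from statistics import mean

def summarize(per_project_values):
    """Return (min, max, avg) for one metric measured across generated projects."""
    return min(per_project_values), max(per_project_values), mean(per_project_values)

# Invented example: number of code files in four generated projects.
code_files = [2, 4, 5, 8]
lo, hi, avg = summarize(code_files)
print(f"# Code Files  min={lo:.2f}  max={hi:.2f}  avg={avg:.2f}")
```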
2307.08072 | 23 | 3We plan to conduct evaluation experiments on IF at a larger scale.
model size is necessary for the CoT ability under 2-bit quantization. In order to further investigate this phenomenon, we conduct a case study analysis for LLaMA models with 7B, 13B and 30B on GSM8K test sets and show several test examples in Table 2. From these examples, we can see that the 7B model was almost incapable of generating correct text outputs, resulting in garbled output. The 13B model could generate responses normally but failed to produce the correct reasoning chain. As a comparison, the 30B model succeeds in generating the correct reasoning chain, albeit with inaccurate inference results.
# 4 How to Enhance the Performance of Low-bit Models?
In order to explore the strategies for achieving higher performance with low-bit post-training quantization (PTQ), we next conduct analysis experiments to investigate the factors that affect the quantization performance. First, we analyze the quantization sensitivity of fine-grained model structures. Second, we examine the effects of performance compensation via model fine-tuning.
# 4.1 Quantization Sensitivity Analysis
# 4.1.1 Experimental Setup | 2307.08072#23 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 23 | NT (Novel Translation). The significant challenges for translating novels are entity consistency, anaphora resolution, and lexical choice (Matusov 2019). We build a document-level Chinese-English corpus, which is extracted from web fictions. Specifically, we crawl 45,134 chapters in 152 books from web fiction websites, covering 14 genres such as fantasy science and romance. We manually align them at both document and sentence levels. Different from previous document-level MT datasets such as LDC4 and OpenSubtitle5 from the news and movie subtitle domains, ours is the first literature-domain MT corpus containing richer linguistic phenomena especially in discourse.
CCT (Classical Chinese Translation). Classical Chinese is a traditional style of written Chinese used in China until the early 20th century, making it different from any modern spoken form of Chinese. Compared with modern Chinese as in novel translation,
4https://www.ldc.upenn.edu. 5https://opus.nlpl.eu/OpenSubtitles-v2018.php.
| 2307.08074#23 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 24 | The generated software typically includes 2 to 8 code files, with an average of 4.26 files. Asset files, created by the art designer using external tools [35], range from 0 to 21, with an average of 8.74 files. Here are some examples of concise text descriptions through which programmers request the designer to create images, such as "The text entry field where the user can input their data", "The background image for the financial dashboard", and "The image representing the player character in the game". The software is accompanied by 4 to 5 document files on average, such as dependency requirements specifications, user manuals, development logs, and software meta information. | 2307.07924#24 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 24 | # 4.1 Quantization Sensitivity Analysis
# 4.1.1 Experimental Setup
As discussed in prior studies (Dettmers et al., 2022; Yao et al., 2023b), different model components (or feature dimensions) might exhibit varied sensitivity to quantization, i.e., different levels of performance degradation. In this part, we mainly focus on low-bit quantization, and set up the following three experiments about quantization sensitivity (Table 3): • Component quantization analysis. In this experiment, we examine the sensitivity of two major components in the Transformer architecture, i.e., attention layers and feed-forward networks (FFN). Specifically, we consider evaluating the performance of two variants denoted as "¬ ATT" and "¬ FFN", where either the attention or FFN components are preserved at FP16 precision, while the remaining components are quantized into low bits. It aims to analyze the level of performance degradation for each kind of model component.
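As a rough illustration of the "¬ FFN" setting described above (feed-forward sub-layers kept at FP16 while everything else is quantized), a minimal PyTorch-style sketch is given below. The module-name patterns (gate_proj, up_proj, down_proj) and the quantize_tensor helper are illustrative assumptions for a LLaMA-style model, not the paper's code, and the quantization is only simulated (quantize-dequantize), so it emulates precision loss without reducing memory.

```python
import torch
import torch.nn as nn

def quantize_tensor(w: torch.Tensor, n_bits: int) -> torch.Tensor:
    """Simulated symmetric round-to-nearest quantization (per-tensor)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

# Assumed LLaMA-style names for the FFN projections; adjust for other architectures.
FFN_PATTERNS = ("gate_proj", "up_proj", "down_proj")

def quantize_except_ffn(model: nn.Module, n_bits: int = 2) -> None:
    """Emulate the '¬ FFN' variant: FFN weights stay at full precision, the rest are quantized."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and not any(p in name for p in FFN_PATTERNS):
            with torch.no_grad():
                module.weight.copy_(quantize_tensor(module.weight, n_bits))
```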
⢠Outlier quantization analysis. As found in prior studies (Dettmers et al., 2022), quantizing large magnitude feature dimensions (called out- liers) can ruin quantization precision, especially when the outliers emerge in all Transformer lay- ers. Thus we ï¬rst sort the outlier dimensions | 2307.08072#24 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 24 | [Figure 6: example inputs and outputs for the three translation tasks, contrasting the discourse context and features with the task description; the Chinese inputs mark omitted connectives, pronouns, and prepositions that must be recovered in the English translations (image content not recoverable from the text extraction).] | 2307.08074#24 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
The software developed by ChatDev typically ranges from 39 to 359 lines of code, with an average of 131.61 lines2. The data suggests that ChatDev tends to produce software with relatively small-scale code. This is partly due to the object-oriented programming design, whose reusability enables code reuse through inheritance and reduces redundancy. We also noted that when the user specified a less specific task, the resulting source code generated by ChatDev tended to be shorter, averaging around 110.97 lines. This is primarily attributed to ChatDev employing high-level logic to fulfill non-specific tasks, often generating code that focuses on providing print information for interface representation. Therefore, we recommend providing ChatDev with specific instructions, such as desired software features, system rules, UI design, and other detailed specifications. By providing
1For example, "Implement a Gomoku game using Python, incorporating an AI opponent with varying difficulty levels" or "Create a Python program to develop an interactive weather dashboard". 2This count includes only lines that contain meaningful code, excluding blank lines.
| 2307.07924#25 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 25 | Bits / Cases / Score
Question: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. How many letters are in Indras and her sister's names?
30B, 4-bit: Answer: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. So her sister's name has 4 more letters than 3. 3 + 4 = 7. So her sister's name has 7 letters. Indras has 6 letters and her sister has 7 letters. 6 + 7 = 13. The answer is 13. (score 35.4)
30B, 2-bit: Answer: Indras' name has 6 letters. Her sister's name has 4 more letters than half of the letters in Indras' name. So she has 6 - 4 = 4 letters. 4 letters. The answer is 4. (score 0.2)
13B, 4-bit: Answer: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. So her sister's name has 6 + 4 = 10 letters. The answer is 10. (score 15.6)
13B, 2-bit: Answer: The answer is 15. Question: 15 -. The answer is 15. (score 0.0)
7B, 4-bit: Answer: Indras has 6 | 2307.08072#25 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 25 | Figure 6: The illustration of the proposed translation tasks in terms of discourse properties and task definition. As seen, a variety of elements may be omitted in the Chinese input but should be recalled in English translation. NT mainly deals with zero pronouns while CCT needs to further tackle omitted connective words that are the marker of discourse structure. PT is the most difficult task because even prepositions could be further omitted. English translations of example sentences are listed in Table 17.
classical Chinese texts are extremely concise and compact by often dropping subjects and objects when a reference to them is understood, which requires discourse information for information recovery. We construct a document-level Classical-Modern Chinese translation dataset, extracted from Chinese classics across the history branch.6 Different from the NiuTrans Classical-Modern corpus7 that has no discourse context, ours maintains the original context. | 2307.08074#25 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 26 | clearer and more specific instructions, users can guide ChatDev to produce more comprehensive and tailored code that aligns with their specific requirements. The number of environment dependencies, which indicates the external software components required, ranges from 1 to 5, with an average of 2.90 dependencies. ChatDev's software environment typically includes numpy, matplotlib, pandas, tkinter, pillow, or flask. The user manual for the software consists of 31 to 232 lines, with an average of 53.96 lines. Based on our experience, the user manual commonly covers sections such as Introduction, Quick Install, Main Features, Usage Instructions, etc.
The number of version updates for the software ranges from 5 to 42, with an average of 13.23 updates. This indicates that the source code undergoes approximately 13 modifications on average, reflecting the collaborative effort among agents in alleviating code hallucination issues throughout the software development process, including code completion, coding, and testing. In exceptional cases where the software fails to pass the maximum number of tests, ChatDev takes proactive measures by engaging in full-scale software re-engineering. In most cases, the software development process involves 1 to 5 development cycles, with an average of 1.40 cycles. | 2307.07924#26 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 26 | 10 letters. The answer is 10. (end of the 13B 4-bit answer; score 15.6)
13B, 2-bit: Answer: The answer is 15. Question: 15 -. The answer is 15. (score 0.0)
7B, 4-bit: Answer: Indras has 6 letters in her name. Her sister's name has 4 more letters than half of the letters in Indras' name. So her sister's name has 6 + 4 = 10 letters. Indras has 6 letters. Her sister's name has 10 letters. The answer is 10. (score 12.2)
7B, 2-bit: Answer: B. 82. M.H. or 4 weeks.. ungers are the best Cash...4........ 44. 30..Cash.B.Age.B.G.o.just.J.1Age.. (score 0.0) | 2307.08072#26 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 26 | PT (Poetry Translation). Poetry translation is regarded as one of the hardest tasks in computational linguistics, or even artificial intelligence in general (Genzel, Uszkoreit, and Och 2010; Ghazvininejad, Choi, and Knight 2018). Chinese poetry is even more concise than classical Chinese with implicit coherence, which is generally reflected through situational context and contextual context. For example, Chinese poetry does not use any cohesive means, but the semantics are still clear. We build a document-level Chinese Poetry to Modern English translation corpus, covering different types of Chinese poetry (e.g. Shi, Ci, Qu, and Fu) translated by famous translators.
6https://en.wikipedia.org/wiki/Chinese_classics. 7https://github.com/NiuTrans/Classical-Modern. | 2307.08074#26 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 27 | In our experiments, we effortlessly set up the sandbox environment by directly installing the required software dependencies. Subsequently, we executed the generated software using the main function. Remarkably, approximately 86.66% of the software systems executed flawlessly, showcasing the robustness and reliability of our developed software. However, a small fraction, 13.33% of the software, encountered execution failures. Upon analyzing the failed software creations, we identified two primary contributing factors. Firstly, in 50% of the cases, the failure was attributed to the token length limit of the API. This limitation prevented obtaining the complete source code within the specified length constraint for code generation. Such challenges are particularly evident when dealing with complex software systems or scenarios requiring extensive code generation. The remaining 50% of the failed software creations were primarily affected by external dependency issues. These challenges emerged when certain dependencies were either unavailable in the cloud or incorrectly versioned, resulting in conflicts and unavailability of specific application programming interfaces (APIs) in the current version. These external dependency-related issues underscore the significance of meticulous management and coordination of the required software components to ensure smooth execution and functionality. Overall, despite encountering a small percentage of failures, our experimental findings demonstrate the feasibility and effectiveness of ChatDev in generating executable software systems, with the majority of the systems successfully executing. | 2307.07924#27 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 27 | Table 2: Case study for the LLaMA models on GSM8K. Details about more precision and tasks can be found in Appendix A.2. The colors of pink and lime denote the wrong and right prediction respectively. The score denotes the average accuracy over all of the GSM8K test set.
Part          Quantization Target            Precision
Weights       all component                  INT2/INT4
Weights       ¬ ATT                          INT2/INT4
Weights       ¬ FFN                          INT2/INT4
Weights       ¬ crucial weights              INT2/INT4
Activations   all non-outlier dimensions     INT8
Activations   +top-1 outlier dimension       INT8
Activations   +top-3 outlier dimensions      INT8
Table 3: Experimental settings for quantization sensitivity analysis. Since activations are more difficult to quantize, we adopt 8-bit precision for quantization.
[Figure 2 bar charts: accuracy per quantization setting with memory footprint shown as a dotted line (numeric axis values not recoverable from the text extraction).]
(a) LLaMA-7B-2bit (b) LLaMA-13B-2bit
Figure 2: Impacts of different model components or substructures on MMLU (five-shot). The memory footprint is counted in GiB (in green dotted lines). | 2307.08072#27 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 27 | [Figure 7: example inputs and outputs for the three generation tasks, contrasting the discourse context and features with the task description; the examples mark connective words and placeholders that the models must fill in cohesively (image content not recoverable from the text extraction).] | 2307.08074#27 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 28 | Duration Analysis We conducted a duration analysis to examine the software production time for different request prompts using ChatDev. The variability in development times across prompts reflects the varying complexity and clarity of the assigned tasks. The graph in Figure 6 provides a visual representation of this distribution. The longest software production duration, represented by the tallest bar on the left side of the graph, was 1030.00 seconds. This extended time was due to extensive dialogue and communication between the reviewer and programmer, leading to a detailed modification scheme. In contrast, the shortest bar on the right end of the graph indicates a minimum software development time of 169.00 seconds. This shorter duration was attributed to the absence of significant bugs and fewer dialogues during coding and testing stages. On average, the development of small-sized software and interfaces using ChatDev took 409.84 seconds, less than 7.00 minutes. In comparison, traditional custom software development cycles, even within agile software development methods, typically require 2 to 4 weeks or even several months per cycle [22; 10]. | 2307.07924#28 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 28 | based on the number of layers they affect and focus on the top-n dimensions. Specifically, we first select the top outlier dimensions in activations (preserved at FP16 precision in the LLM.int8() method (Dettmers et al., 2022)), and quantize those belonging to the top-n dimensions and other non-outlier dimensions to INT8 precision. The results are then compared with the standard LLM.int8() method. This approach enables us to investigate the impacts of outlier feature dimensions in terms of emergent abilities.
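A minimal sketch of how such outlier dimensions could be identified and ranked is shown below. The 6.0 magnitude threshold follows the LLM.int8() convention, but the calibration setup and data structures here are illustrative assumptions rather than the paper's actual procedure.

```python
import torch
from collections import Counter

def rank_outlier_dimensions(layer_activations, threshold: float = 6.0):
    """Rank hidden dimensions by the number of layers in which they appear as outliers.

    `layer_activations` is assumed to be a list of [num_tokens, hidden_dim] tensors,
    one per Transformer layer, collected on a small calibration set.
    """
    counts = Counter()
    for acts in layer_activations:
        outlier_dims = (acts.abs().max(dim=0).values > threshold).nonzero().flatten()
        counts.update(outlier_dims.tolist())
    # Dimensions affecting the most layers come first; these are the top-n outliers
    # that LLM.int8() would keep at FP16.
    return [dim for dim, _ in counts.most_common()]
```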
⢠Substructure quantization analysis. In existing work, they either study component-level or feature- level impact on quantization performance. In ad- dition, we also empirically ï¬nd that different sub- structures in a component have varied importance for quantized LLMs. For example, as will be dis- cussed in Section 4.1.2, outlier dimensions mainly exist in the down projections of the FFN compo- nents. Thus, we consider more ï¬ne-grained quanti- zation at the substructure level. Specially, crucial substructures in a component are preserved at the FP16 precision level. The results are reported as ⬠| 2307.08072#28 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 28 | Figure 7: The illustration of the proposed generation tasks in terms of discourse properties and task definition. As seen, discourse structure and main contents have been specified in TE, thus the task needs to generate cohesive words, while TI should further consider cohesion relations when generating a whole sentence based on the previous and following ones. TC is the most difficult because it needs to generate more sentences with a unified structure. English translations are listed in Table 17.
# 3.3 Language Generation Tasks | 2307.08074#28 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
2307.07924 | 29 | Dialogue Statistics - In ChatDev, we employed a chat chain mechanism to facilitate software development. Each chat chain represents the production of software for a specific task and consists of multiple multi-utterance chat rounds. During these rounds, agents engage in discussions to address predefined subtasks, such as language choices, proposing solutions, and making final decisions. After completing all subtasks, a chat chain concludes with the development of the software product. For our case study tasks, we analyzed the chat chains and collected statistics, including the total number of utterances and prompt tokens used. These statistics are presented in Table 2.
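For readers unfamiliar with the chat-chain mechanism referenced in this excerpt, the following is a minimal, deliberately generic sketch of the idea (subtasks resolved sequentially by two roles exchanging messages until a conclusion is reached). The ask_llm helper, role names, subtask list, and termination marker are placeholders for illustration; they are not ChatDev's actual interface.

```python
def ask_llm(system_prompt: str, history: list[str]) -> str:
    """Placeholder for a call to any chat LLM; not ChatDev's real API."""
    raise NotImplementedError

def run_chat_round(instructor: str, assistant: str, subtask: str, max_turns: int = 6) -> str:
    """Two roles discuss one subtask; the final utterance is kept as the conclusion."""
    history = [f"Subtask: {subtask}"]
    for turn in range(max_turns):
        role = instructor if turn % 2 == 0 else assistant
        reply = ask_llm(f"You are the {role}.", history)
        history.append(f"{role}: {reply}")
        if "<DECISION>" in reply:  # assumed termination marker ending the round
            break
    return history[-1]

def run_chat_chain(task: str) -> dict:
    """Resolve predefined subtasks in order and collect one conclusion per subtask."""
    subtasks = ["choose language", "propose solution", "write code", "review and test"]
    return {s: run_chat_round("Instructor", "Assistant", f"{task}: {s}") for s in subtasks}
```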
We noticed occasional instances of repetitive expressions of gratitude in the dialogue, even after reaching a consensus and making decisions. However, this phenomenon does not significantly impact the final outcome. The self-reflection mechanism effectively allows agents to extract decision results and conclusions from the dialogue using text summarization-like abilities. This mechanism helps agents avoid unnecessary dialogue and focus on extracting meaningful information. The
[Figure 6 data: per-task bars of Software Production Duration (s), with the average over all tasks and a fitted curve (numeric axis values not recoverable from the text extraction).]
Figure 6: Duration Distribution. The bars in the graph are arranged in descending order, showcasing the distribution of software development runtime for different tasks.
Table 2: The statistical analysis of all dialogues in chat chains. | 2307.07924#29 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 29 | crucial weights", where the crucial weight matrices with high quantization error can be identified based on existing quantization algorithms.
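One simple way to realize the "crucial weights" selection sketched above is to rank weight matrices by the error that quantization would introduce. The paper defers to existing quantization algorithms for this; the plain round-to-nearest error used below is only an illustrative stand-in.

```python
import torch
import torch.nn as nn

def quantization_error(w: torch.Tensor, n_bits: int = 2) -> float:
    """Mean squared error introduced by simulated round-to-nearest quantization."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = (w / scale).round().clamp(-qmax - 1, qmax) * scale
    return (w - w_q).pow(2).mean().item()

def rank_crucial_weights(model: nn.Module, n_bits: int = 2):
    """Sort linear weight matrices by quantization error; the top entries are the
    'crucial weights' one might keep at FP16 under a '¬ crucial weights' setting."""
    errors = {name: quantization_error(m.weight.detach(), n_bits)
              for name, m in model.named_modules() if isinstance(m, nn.Linear)}
    return sorted(errors.items(), key=lambda kv: kv[1], reverse=True)
```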
# 4.1.2 Results and Analysis
The FFN component is of substantial significance for 2-bit quantization. We conducted test experiments to evaluate the quantization sensitivity of different model components, specifically attention and FFN components. As 4-bit quantization can retain the original performance while 2-bit models suffer from severe declines, we focus on analyzing the extreme 2-bit case. Results in Figure 2 demonstrate that the FFN component exhibits substantial significance for 2-bit models. Keeping FFN in FP16 improves LLaMA-7B-2bit's performance from 0.038 to 0.225 and LLaMA-13B-2bit's performance from 0.148 to 0.286. These improvements show the importance of FFN components for retaining the performance, which needs specific consideration under extreme 2-bit quantization.
[Figure 3 bar charts (partial): comparison of the 7B and 13B models across the outlier quantization settings (numeric axis values not recoverable from the text extraction).] | 2307.08072#29 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specially, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-gained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds lights
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 29 | # 3.3 Language Generation Tasks
Language generation is a sequence generation task to produce text based on a given context (Reiter and Dale 1997). Generating long and coherent text is an important but challenging task, particularly on lexical cohesion (Wanner 1996; Guan et al. 2021). As shown in Figure 7, we design three representative generation tasks that differ in degrees of freedom. The more open-ended the generation task, the more difficult it is to generate accurate cohesive devices and discourse structure. There are a number of automatic evaluation metrics for measuring the quality of generated texts. We use two groups of metrics: (1) Reference-based scores BLEU (Papineni et al. 2002) and BERTScore (Zhang et al. 2019a), which measure the lexical and semantic similarities between the generated texts and the ground-truth references respectively. Note that, for open-ended text generation tasks such as TI and TC, reference-based metrics are less reliable because the generated text could be of high quality but different from the ground-truth reference. How to accurately measure the performance of open-ended text generation is still an open question and is beyond the scope of this paper. (2) Dist-n scores (Li et al. 2016) calculate the ratio of distinct n-grams in generated text to evaluate lexical diversity. | 2307.08074#29 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
totally evaluate 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
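The Dist-n diversity score mentioned in the Disco-Bench excerpt above (the ratio of distinct n-grams to total n-grams in the generated text) can be sketched in a few lines. This is a minimal illustration assuming whitespace tokenization; the benchmark's own tokenization and aggregation may differ.

```python
def dist_n(texts: list[str], n: int = 2) -> float:
    """Dist-n: distinct n-grams divided by total n-grams over the generated texts."""
    ngrams = []
    for text in texts:
        tokens = text.split()  # whitespace tokenization is an assumption here
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / max(len(ngrams), 1)

# Example: dist_n(["the cat sat", "the cat ran"], n=2) -> 3 distinct / 4 total = 0.75
```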
2307.07924 | 30 | Table 2: The statistical analysis of all dialogues in chat chains.
                     Min         Max          Avg.
# Self-Reflection    1.00        4.00         1.24
# Utterances         24.00       104.00       45.60
# Prompt Tokens      11,119.00   91,208.00    36,902.23
# Completion Tokens  3,161.00    27,162.00    11,567.37
# Total Tokens       15,294.00   111,019.00   48,469.60
self-reflection number in the dialogue ranges from 1 to 4, with an average of 1.24. In most cases, agents can autonomously conclude the dialogue based on predefined communication protocols. | 2307.07924#30 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
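A minimal sketch of how the Min/Max/Avg. statistics in the ChatDev chunk above (2307.07924#30) could be reproduced from per-software dialogue records. The field names and the two sample records are hypothetical stand-ins, not the authors' released data or code.

```python
from statistics import mean

# Hypothetical per-software dialogue records; ChatDev's real logs are not included here.
runs = [
    {"self_reflections": 1, "utterances": 24, "prompt_tokens": 11_119, "completion_tokens": 3_161},
    {"self_reflections": 2, "utterances": 104, "prompt_tokens": 91_208, "completion_tokens": 27_162},
]

for key in ("self_reflections", "utterances", "prompt_tokens", "completion_tokens"):
    values = [run[key] for run in runs]
    print(f"{key:>18}: min={min(values)}  max={max(values)}  avg={mean(values):.2f}")

# Total tokens per run are simply the sum of prompt and completion tokens.
totals = [run["prompt_tokens"] + run["completion_tokens"] for run in runs]
print(f"{'total_tokens':>18}: min={min(totals)}  max={max(totals)}  avg={mean(totals):.2f}")
```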
2307.08072 | 30 | [Figure 3: three bar charts comparing LLaMA-7B and LLaMA-13B under different outlier-quantization settings; panels: (a) MMLU (5-shot), (b) GSM8K (CoT), (c) WikiText]
Figure 3: Impacts of feature outliers on LLaMA models (7B and 13B). "non-outlier" denotes the quantization on all non-outlier dimensions, and "+top-1" and "+top-3" refer to quantization of the top-1 and top-3 outlier dimensions in addition to the non-outlier dimensions. "↓" indicates that lower indicators are better. | 2307.08072#30 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
2307.08074 | 30 | TE (Text Expansion). We define a new task, which has been seldom studied previously: given a predefined text, the goal of TE is to insert appropriate words, phrases, or clauses for adding more details and deepening the meaning, while retaining coherence and cohesiveness. We use a semi-automatic generation method to obtain large-scale training data. The raw data are extracted from English books detailed in Table 5. Specifically, we
Table 2: Human evaluation on the benchmark quality. We also report the inter-annotator agreement (in brackets) for the translation and generation tasks.
Task  Agreement    Task  Fluency     Adequacy     Task  Fluency     Adequacy
SI    0.76         NT    4.9 (0.60)  4.7 (0.78)   TE    4.0 (0.51)  4.1 (0.51)
ZPR   0.91         CCT   4.9 (0.65)  4.9 (0.55)   TI    4.3 (0.63)  4.4 (0.55)
MRC   0.97         PT    4.7 (0.63)  4.4 (0.69)   TC    4.3 (0.63)  4.4 (0.55) | 2307.08074#30 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
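To make the TE setup in chunk 2307.08074#30 above concrete, here is a purely hypothetical input/output pair in the format the task implies: the input is a stripped-down sentence and the target restores modifier words, phrases, or clauses. The sentences are illustrative and not taken from the Disco-Bench data.

```python
# Hypothetical Text Expansion (TE) training pair: "input" is the text after
# rule-based deletion of modifiers, and "target" is the original sentence the
# model should reconstruct by inserting appropriate modifiers.
te_pair = {
    "input":  "She walked into the room and sat down.",
    "target": "She walked slowly into the dimly lit room and sat down by the window.",
}
print(te_pair["input"], "->", te_pair["target"])
```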
2307.07924 | 31 | self-reflection number in the dialogue ranges from 1 to 4, with an average of 1.24. In most cases, agents can autonomously conclude the dialogue based on predefined communication protocols.
On average, a chat chain contains 45.60 utterances, ranging from a minimum of 24 to a maximum of 104. The count of utterances encompasses discussions related to achievability of subtasks, evaluations of generated code quality, feedback on testing, advice for improvements, and the actual writing and generation of software code files and documents. Likewise, we have observed that ChatDev tends to engage in less communication through utterances for abstract tasks compared to specific tasks, averaging around 34.40 utterances. Analysis of the dialogues revealed that during the design and coding stages, agents conducted multiple rounds of discussions to delve into the details of numerous requirements or propose modification suggestions. These discussions aimed to make informed decisions regarding the specific tasks at hand. This phenomenon aligns with real-world practices, where addressing specific tasks often involves more detailed discussions and deliberations. | 2307.07924#31 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
2307.08072 | 31 | The outlier dimension that affects most of the layers is primarily responsible for the performance degradation. In addition to important components, we continue to analyze the impacts of outlier dimensions on low-bit model performance. Since Dettmers et al. (2022) observed that feature outliers emerging across all Transformer layers are highly important to model performance, we focus on those outliers that affect most of the layers. Specifically, we first identify the top outlier dimensions according to the number of layers they affect. Then, we evaluate the impact of the top-1 and top-3 outlier dimensions by quantizing them into low bits while keeping the other outlier dimensions in FP16. In addition, we also quantize the non-outlier dimensions as in LLM.int8(). The evaluation results of LLaMA-7B and LLaMA-13B are presented in Figure 3. We can see that these top outliers have a significant impact on the quantization performance, especially the CoT results and PPL scores. Interestingly, LLaMA-13B encounters a more severe performance degradation compared to the 7B model by quantizing the top-1 outlier dimension. It | 2307.08072#31 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
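The following is a minimal, hypothetical sketch of the outlier-dimension analysis described in chunk 2307.08072#31 above: rank hidden dimensions by how many layers they act as activation outliers in, then pick the top-k for separate treatment. The 6.0 magnitude threshold (in the spirit of LLM.int8()) and the random activation statistics are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, hidden_size = 32, 4096

# Stand-in for per-layer maximum absolute activations of each hidden dimension,
# e.g. collected with forward hooks over a small calibration set.
max_abs_act = np.abs(rng.normal(scale=2.0, size=(num_layers, hidden_size)))

OUTLIER_THRESHOLD = 6.0                       # illustrative magnitude criterion
is_outlier = max_abs_act > OUTLIER_THRESHOLD  # boolean mask of shape (layers, dims)
layers_affected = is_outlier.sum(axis=0)      # number of layers each dimension affects

top_k = 3
top_dims = np.argsort(-layers_affected)[:top_k]
print("top outlier dimensions:", top_dims.tolist())
print("layers affected:", layers_affected[top_dims].tolist())
# These top dimensions would then be quantized (or kept in FP16) separately from
# the rest to measure their impact, as in Figure 3 of the paper.
```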
2307.08074 | 31 | use the Stanford Parser to produce the syntactic tree of a text, and then manually design some rules to delete the modifier words and phrases in the text. We use the remaining words as the input and predict the dropped modifier. Since some deletion operations may produce ill-formed text, we filter out the training instances if the remaining text has a large perplexity measured by a language model. In order to retain the coherence and meaning of the source document, the expanded parts in the target text tend to be modifier phrases or clauses. More TE examples are detailed in Table 14. The expanded contents are summarized into 5 categories, as shown in Table 13. To evaluate the TE models, we use two metrics: BLEU and PPL. | 2307.08074#31 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
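A minimal sketch (an assumption, not the authors' released pipeline) of the perplexity filter mentioned in chunk 2307.08074#31 above: after rule-based deletion of modifiers, candidate inputs whose remaining text looks ill-formed, i.e. scores a high perplexity under a pretrained language model, are dropped. GPT-2 and the 200.0 cut-off are stand-ins for whatever model and threshold the authors actually used.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    # Exponentiated average token-level cross-entropy of the text under GPT-2.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

PPL_THRESHOLD = 200.0  # illustrative cut-off, not taken from the paper

candidates = [
    "She walked into the room and sat down.",      # well-formed remainder: keep
    "walked the into room she down sat quietly.",  # ill-formed remainder: drop
]
kept = [text for text in candidates if perplexity(text) < PPL_THRESHOLD]
print(kept)
```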
2307.07924 | 32 | We monitored API interactions and token usage during software production in ChatDev. On average, ChatDev requires 36,902.23 prompt tokens, 11,567.37 completion tokens, and a total of 48,469.60 tokens to develop a single software. The average total cost in software production is approximately $0.1569.[3] To determine the overall cost of software development with ChatDev, we also consider the cost of designer-produced images. The average designer cost is $0.1398 per software, with each software production involving 8.74 graphics creations on average. Thus, the average software development cost at ChatDev is $0.2967, significantly lower than traditional custom software development companies' expenses [18; 21; 31].
Reviewer-Programmer Dialogue Analysis. In this section, we delve into the primary exchanges between the reviewer and the programmer, specifically concerning code-related matters during the coding phase. We summarize the reviewer's evaluations of the programmer's source code at the coding stage. Figure 7 provides a visual representation of the reviewer's suggestions in the form of pie charts. As depicted in the figure, the most frequently discussed issue in the reviewer-programmer communication during code review is "methods not implemented" (34.85%). This challenge commonly arises in code generation for complex models, where core functionalities often receive
[3] Based on official API prices for July 2023.
| 2307.07924#32 | Communicative Agents for Software Development | Software engineering is a domain characterized by intricate decision-making
processes, often relying on nuanced intuition and consultation. Recent
advancements in deep learning have started to revolutionize software
engineering practices through elaborate designs implemented at various stages
of software development. In this paper, we present an innovative paradigm that
leverages large language models (LLMs) throughout the entire software
development process, streamlining and unifying key processes through natural
language communication, thereby eliminating the need for specialized models at
each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered
software development company that mirrors the established waterfall model,
meticulously dividing the development process into four distinct chronological
stages: designing, coding, testing, and documenting. Each stage engages a team
of "software agents", such as programmers, code reviewers, and test engineers,
fostering collaborative dialogue and facilitating a seamless workflow. The chat
chain acts as a facilitator, breaking down each stage into atomic subtasks.
This enables dual roles, allowing for proposing and validating solutions
through context-aware communication, leading to efficient resolution of
specific subtasks. The instrumental analysis of ChatDev highlights its
remarkable efficacy in software generation, enabling the completion of the
entire software development process in under seven minutes at a cost of less
than one dollar. It not only identifies and alleviates potential
vulnerabilities but also rectifies potential hallucinations while maintaining
commendable efficiency and cost-effectiveness. The potential of ChatDev unveils
fresh possibilities for integrating LLMs into the realm of software
development. Our code is available at https://github.com/OpenBMB/ChatDev. | http://arxiv.org/pdf/2307.07924 | Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.SE, cs.CL, cs.MA | https://github.com/OpenBMB/ChatDev | null | cs.SE | 20230716 | 20231219 | [
{
"id": "2204.06125"
},
{
"id": "2107.03374"
},
{
"id": "2305.13281"
},
{
"id": "2304.03442"
},
{
"id": "2304.05128"
},
{
"id": "2303.17760"
}
] |
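A small arithmetic check of the cost figures quoted in chunk 2307.07924#32 above: the average per-software API cost plus the average designer (image-generation) cost gives the overall development cost reported there.

```python
avg_api_cost = 0.1569       # USD per software (July 2023 API prices, per the chunk's footnote)
avg_designer_cost = 0.1398  # USD per software, covering ~8.74 image generations on average

total_cost = avg_api_cost + avg_designer_cost
print(f"average development cost per software: ${total_cost:.4f}")  # -> $0.2967, as reported
```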
2307.08072 | 32 | Interestingly, LLaMA-13B encounters a more severe performance degradation than the 7B model when the top-1 outlier dimension is quantized. This indicates that quantizing important outliers has a more significant impact on larger models. Another important finding is that the outlier dimensions tend to emerge in specific substructures of a component. For example, outliers mainly occur in the down projection of the FFN components for LLaMA-7B. | 2307.08072#32 | Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study | Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs as well as increasing the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
\emph{emergent abilities}, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation on the test of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs. | http://arxiv.org/pdf/2307.08072 | Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen | cs.CL, cs.AI | 15 pages, 4 figures | null | cs.CL | 20230716 | 20230726 | [
{
"id": "2305.14314"
},
{
"id": "2206.07682"
},
{
"id": "2210.17323"
},
{
"id": "2303.08302"
}
] |
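To illustrate where the substructure named in chunk 2307.08072#32 above lives in the Hugging Face LLaMA implementation, the sketch below builds a tiny randomly initialized LLaMA and lists its FFN down-projection modules; the miniature configuration is an illustrative assumption, not the 7B model itself.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Tiny random-weight LLaMA so the example runs without downloading checkpoints.
config = LlamaConfig(hidden_size=512, intermediate_size=1376,
                     num_hidden_layers=4, num_attention_heads=8)
model = LlamaForCausalLM(config)

# In the HF implementation, each decoder layer's FFN exposes gate_proj, up_proj
# and down_proj; the chunk above reports that outlier dimensions concentrate in
# the down projection for LLaMA-7B.
down_projs = [name for name, _ in model.named_modules() if name.endswith("mlp.down_proj")]
print(down_projs)  # e.g. ['model.layers.0.mlp.down_proj', ..., 'model.layers.3.mlp.down_proj']
```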
2307.08074 | 32 | TI (Text Infilling). The task aims to predict a text snippet given its surrounding context (Zhu, Hu, and Xing 2019). To evaluate the discourse-level model capability, we focus on the sentence infilling task that predicts a missing bridge sentence x0 given two preceding sentences (x-2 and x-1) and two subsequent sentences (x1 and x2) (Huang et al. 2020; Cai et al. 2020). We build a new TI dataset by extracting consecutive 5-sentence paragraphs from the Chinese web fictions used in the NT task. To evaluate different models, we use the following automatic metrics: Perplexity (PPL), BLEU (Papineni et al. 2002), BERTScore (Zhang et al. 2019a) and diversity scores (Dist-2/4) (Li et al. 2016). We report the degree of diversity by calculating the ratio of distinct 2-grams/4-grams in the generated text. | 2307.08074#32 | Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling | Modeling discourse -- the linguistic phenomena that go beyond individual
sentences, is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of inter-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate intra-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. We
evaluate a total of 20 general-, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench. | http://arxiv.org/pdf/2307.08074 | Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu | cs.CL, cs.AI | Zhaopeng Tu is the corresponding author | null | cs.CL | 20230716 | 20230722 | [
{
"id": "2109.05729"
},
{
"id": "1907.11692"
},
{
"id": "2110.06696"
},
{
"id": "2304.02210"
},
{
"id": "2012.11157"
},
{
"id": "1901.00158"
},
{
"id": "2305.10196"
}
] |
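A minimal sketch of the Dist-n diversity score mentioned in chunk 2307.08074#32 above: the ratio of distinct n-grams to all n-grams in the generated text. Whitespace tokenization is an illustrative simplification (Chinese outputs would typically be segmented or scored at the character level).

```python
from typing import List

def dist_n(tokens: List[str], n: int) -> float:
    # Ratio of distinct n-grams to total n-grams; 0.0 for texts shorter than n tokens.
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

generated = "the cat sat on the mat and the dog sat on the rug".split()
print(f"Dist-2 = {dist_n(generated, 2):.3f}")
print(f"Dist-4 = {dist_n(generated, 4):.3f}")
```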